00:00:00.001 Started by upstream project "autotest-per-patch" build number 130568
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.094 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.095 The recommended git tool is: git
00:00:00.095 using credential 00000000-0000-0000-0000-000000000002
00:00:00.098 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.146 Fetching changes from the remote Git repository
00:00:00.151 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.216 Using shallow fetch with depth 1
00:00:00.216 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.216 > git --version # timeout=10
00:00:00.256 > git --version # 'git version 2.39.2'
00:00:00.256 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.297 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.298 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.009 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.024 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.038 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD)
00:00:07.038 > git config core.sparsecheckout # timeout=10
00:00:07.053 > git read-tree -mu HEAD # timeout=10
00:00:07.071 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5
00:00:07.093 Commit message: "packer: Merge irdmafedora into main fedora image"
00:00:07.093 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10
00:00:07.172 [Pipeline] Start of Pipeline
00:00:07.186 [Pipeline] library
00:00:07.188 Loading library shm_lib@master
00:00:07.188 Library shm_lib@master is cached. Copying from home.
00:00:07.206 [Pipeline] node
00:00:07.214 Running on CYP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.215 [Pipeline] {
00:00:07.224 [Pipeline] catchError
00:00:07.225 [Pipeline] {
00:00:07.235 [Pipeline] wrap
00:00:07.243 [Pipeline] {
00:00:07.248 [Pipeline] stage
00:00:07.249 [Pipeline] { (Prologue)
00:00:07.432 [Pipeline] sh
00:00:07.743 + logger -p user.info -t JENKINS-CI
00:00:07.762 [Pipeline] echo
00:00:07.763 Node: CYP6
00:00:07.771 [Pipeline] sh
00:00:08.078 [Pipeline] setCustomBuildProperty
00:00:08.087 [Pipeline] echo
00:00:08.089 Cleanup processes
00:00:08.097 [Pipeline] sh
00:00:08.396 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.396 2389727 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.412 [Pipeline] sh
00:00:08.700 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.700 ++ grep -v 'sudo pgrep'
00:00:08.700 ++ awk '{print $1}'
00:00:08.700 + sudo kill -9
00:00:08.700 + true
00:00:08.715 [Pipeline] cleanWs
00:00:08.724 [WS-CLEANUP] Deleting project workspace...
00:00:08.724 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.730 [WS-CLEANUP] done
00:00:08.734 [Pipeline] setCustomBuildProperty
00:00:08.750 [Pipeline] sh
00:00:09.036 + sudo git config --global --replace-all safe.directory '*'
00:00:09.154 [Pipeline] httpRequest
00:00:10.078 [Pipeline] echo
00:00:10.080 Sorcerer 10.211.164.101 is alive
00:00:10.092 [Pipeline] retry
00:00:10.094 [Pipeline] {
00:00:10.110 [Pipeline] httpRequest
00:00:10.115 HttpMethod: GET
00:00:10.115 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:10.116 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:10.141 Response Code: HTTP/1.1 200 OK
00:00:10.141 Success: Status code 200 is in the accepted range: 200,404
00:00:10.142 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:34.505 [Pipeline] }
00:00:34.523 [Pipeline] // retry
00:00:34.531 [Pipeline] sh
00:00:34.819 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:34.837 [Pipeline] httpRequest
00:00:35.339 [Pipeline] echo
00:00:35.341 Sorcerer 10.211.164.101 is alive
00:00:35.352 [Pipeline] retry
00:00:35.354 [Pipeline] {
00:00:35.370 [Pipeline] httpRequest
00:00:35.375 HttpMethod: GET
00:00:35.376 URL: http://10.211.164.101/packages/spdk_1c027d3563632a047e728d198e6a99b59e27c669.tar.gz
00:00:35.377 Sending request to url: http://10.211.164.101/packages/spdk_1c027d3563632a047e728d198e6a99b59e27c669.tar.gz
00:00:35.393 Response Code: HTTP/1.1 200 OK
00:00:35.393 Success: Status code 200 is in the accepted range: 200,404
00:00:35.393 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_1c027d3563632a047e728d198e6a99b59e27c669.tar.gz
00:01:16.940 [Pipeline] }
00:01:16.960 [Pipeline] // retry
00:01:16.969 [Pipeline] sh
00:01:17.257 + tar --no-same-owner -xf spdk_1c027d3563632a047e728d198e6a99b59e27c669.tar.gz
00:01:20.567 [Pipeline] sh
00:01:20.855 + git -C spdk log --oneline -n5
00:01:20.855 1c027d356 bdev_xnvme: add support for dataset management
00:01:20.855 447520417 xnvme: bump to 0.7.5
00:01:20.855 e9b861378 lib/iscsi: Fix: Unregister logout timer
00:01:20.855 081f43f2b lib/nvmf: Fix memory leak in nvmf_bdev_ctrlr_unmap
00:01:20.855 daeaec816 test/unit: remove unneeded MOCKs from ftl unit tests
00:01:20.867 [Pipeline] }
00:01:20.883 [Pipeline] // stage
00:01:20.894 [Pipeline] stage
00:01:20.897 [Pipeline] { (Prepare)
00:01:20.918 [Pipeline] writeFile
00:01:20.936 [Pipeline] sh
00:01:21.225 + logger -p user.info -t JENKINS-CI
00:01:21.240 [Pipeline] sh
00:01:21.532 + logger -p user.info -t JENKINS-CI
00:01:21.547 [Pipeline] sh
00:01:21.835 + cat autorun-spdk.conf
00:01:21.835 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:21.835 SPDK_TEST_NVMF=1
00:01:21.835 SPDK_TEST_NVME_CLI=1
00:01:21.835 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:21.835 SPDK_TEST_NVMF_NICS=e810
00:01:21.835 SPDK_TEST_VFIOUSER=1
00:01:21.835 SPDK_RUN_UBSAN=1
00:01:21.835 NET_TYPE=phy
00:01:21.844 RUN_NIGHTLY=0
00:01:21.849 [Pipeline] readFile
00:01:21.877 [Pipeline] withEnv
00:01:21.879 [Pipeline] {
00:01:21.895 [Pipeline] sh
00:01:22.185 + set -ex
00:01:22.185 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:22.185 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:22.185 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:22.185 ++ SPDK_TEST_NVMF=1
00:01:22.185 ++ SPDK_TEST_NVME_CLI=1
00:01:22.185 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:22.185 ++ SPDK_TEST_NVMF_NICS=e810
00:01:22.185 ++ SPDK_TEST_VFIOUSER=1
00:01:22.185 ++ SPDK_RUN_UBSAN=1
00:01:22.185 ++ NET_TYPE=phy
00:01:22.185 ++ RUN_NIGHTLY=0
00:01:22.185 + case $SPDK_TEST_NVMF_NICS in
00:01:22.185 + DRIVERS=ice
00:01:22.185 + [[ tcp == \r\d\m\a ]]
00:01:22.185 + [[ -n ice ]]
00:01:22.185 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:22.185 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:30.395 rmmod: ERROR: Module irdma is not currently loaded
00:01:30.395 rmmod: ERROR: Module i40iw is not currently loaded
00:01:30.395 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:30.395 + true
00:01:30.395 + for D in $DRIVERS
00:01:30.395 + sudo modprobe ice
00:01:30.395 + exit 0
00:01:30.405 [Pipeline] }
00:01:30.420 [Pipeline] // withEnv
00:01:30.425 [Pipeline] }
00:01:30.437 [Pipeline] // stage
00:01:30.446 [Pipeline] catchError
00:01:30.447 [Pipeline] {
00:01:30.461 [Pipeline] timeout
00:01:30.461 Timeout set to expire in 1 hr 0 min
00:01:30.463 [Pipeline] {
00:01:30.477 [Pipeline] stage
00:01:30.479 [Pipeline] { (Tests)
00:01:30.493 [Pipeline] sh
00:01:30.784 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:30.784 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:30.784 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:30.784 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:30.784 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:30.784 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:30.784 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:30.784 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:30.784 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:30.784 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:30.784 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:30.784 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:30.784 + source /etc/os-release
00:01:30.784 ++ NAME='Fedora Linux'
00:01:30.784 ++ VERSION='39 (Cloud Edition)'
00:01:30.784 ++ ID=fedora
00:01:30.784 ++ VERSION_ID=39
00:01:30.784 ++ VERSION_CODENAME=
00:01:30.784 ++ PLATFORM_ID=platform:f39
00:01:30.784 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:30.784 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:30.784 ++ LOGO=fedora-logo-icon
00:01:30.784 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:30.784 ++ HOME_URL=https://fedoraproject.org/
00:01:30.784 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:30.784 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:30.784 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:30.784 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:30.784 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:30.784 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:30.784 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:30.784 ++ SUPPORT_END=2024-11-12
00:01:30.784 ++ VARIANT='Cloud Edition'
00:01:30.784 ++ VARIANT_ID=cloud
00:01:30.784 + uname -a
00:01:30.784 Linux spdk-cyp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:30.784 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:34.087 Hugepages
00:01:34.087 node hugesize free / total
00:01:34.087 node0 1048576kB 0 / 0
00:01:34.087 node0 2048kB 0 / 0
00:01:34.087 node1 1048576kB 0 / 0
00:01:34.087 node1 2048kB 0 / 0
00:01:34.087
00:01:34.087 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:34.087 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:34.087 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:34.088 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:34.088 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:34.088 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:34.088 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:34.088 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:34.088 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:34.088 NVMe 0000:65:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:34.088 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:34.088 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:34.088 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:34.088 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:34.088 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:34.088 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:34.088 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:34.088 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:34.088 + rm -f /tmp/spdk-ld-path
00:01:34.088 + source autorun-spdk.conf
00:01:34.088 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:34.088 ++ SPDK_TEST_NVMF=1
00:01:34.088 ++ SPDK_TEST_NVME_CLI=1
00:01:34.088 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:34.088 ++ SPDK_TEST_NVMF_NICS=e810
00:01:34.088 ++ SPDK_TEST_VFIOUSER=1
00:01:34.088 ++ SPDK_RUN_UBSAN=1
00:01:34.088 ++ NET_TYPE=phy
00:01:34.088 ++ RUN_NIGHTLY=0
00:01:34.088 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:34.088 + [[ -n '' ]]
00:01:34.088 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:34.088 + for M in /var/spdk/build-*-manifest.txt
00:01:34.088 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:34.088 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:34.088 + for M in /var/spdk/build-*-manifest.txt
00:01:34.088 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:34.088 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:34.088 + for M in /var/spdk/build-*-manifest.txt
00:01:34.088 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:34.088 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:34.088 ++ uname
00:01:34.088 + [[ Linux == \L\i\n\u\x ]]
00:01:34.088 + sudo dmesg -T
00:01:34.088 + sudo dmesg --clear
00:01:34.088 + dmesg_pid=2390735
00:01:34.088 + [[ Fedora Linux == FreeBSD ]]
00:01:34.088 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:34.088 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:34.088 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:34.088 + [[ -x /usr/src/fio-static/fio ]]
00:01:34.088 + export FIO_BIN=/usr/src/fio-static/fio
00:01:34.088 + FIO_BIN=/usr/src/fio-static/fio
00:01:34.088 + sudo dmesg -Tw
00:01:34.088 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:34.088 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:34.088 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:34.088 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:34.088 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:34.088 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:34.088 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:34.088 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:34.088 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:34.088 Test configuration:
00:01:34.088 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:34.088 SPDK_TEST_NVMF=1
00:01:34.088 SPDK_TEST_NVME_CLI=1
00:01:34.088 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:34.088 SPDK_TEST_NVMF_NICS=e810
00:01:34.088 SPDK_TEST_VFIOUSER=1
00:01:34.088 SPDK_RUN_UBSAN=1
00:01:34.088 NET_TYPE=phy
00:01:34.088 RUN_NIGHTLY=0
16:26:25 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
16:26:25 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
16:26:25 -- scripts/common.sh@15 -- $ shopt -s extglob
16:26:25 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
16:26:25 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
16:26:25 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
16:26:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:26:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:26:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:26:25 -- paths/export.sh@5 -- $ export PATH
16:26:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:26:25 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
16:26:25 -- common/autobuild_common.sh@479 -- $ date +%s
16:26:25 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727792785.XXXXXX
16:26:25 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727792785.WEp8eC
16:26:25 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
16:26:25 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']'
16:26:25 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
16:26:25 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
16:26:25 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
16:26:25 -- common/autobuild_common.sh@495 -- $ get_config_params
16:26:25 -- common/autotest_common.sh@407 -- $ xtrace_disable
16:26:25 -- common/autotest_common.sh@10 -- $ set +x
16:26:25 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
16:26:25 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
16:26:25 -- pm/common@17 -- $ local monitor
16:26:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:26:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:26:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:26:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:26:25 -- pm/common@21 -- $ date +%s
16:26:25 -- pm/common@25 -- $ sleep 1
16:26:25 -- pm/common@21 -- $ date +%s
16:26:25 -- pm/common@21 -- $ date +%s
16:26:25 -- pm/common@21 -- $ date +%s
16:26:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727792785
16:26:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727792785
16:26:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727792785
16:26:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727792785
00:01:34.350 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727792785_collect-cpu-load.pm.log
00:01:34.350 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727792785_collect-cpu-temp.pm.log
00:01:34.350 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727792785_collect-vmstat.pm.log
00:01:34.350 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727792785_collect-bmc-pm.bmc.pm.log
00:01:35.292 16:26:26 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
16:26:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
16:26:26 -- spdk/autobuild.sh@12 -- $ umask 022
16:26:26 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
16:26:26 -- spdk/autobuild.sh@16 -- $ date -u
00:01:35.292 Tue Oct 1 02:26:26 PM UTC 2024
16:26:26 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:35.292 v25.01-pre-25-g1c027d356
16:26:26 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
16:26:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
16:26:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
16:26:26 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
16:26:26 -- common/autotest_common.sh@1107 -- $ xtrace_disable
16:26:26 -- common/autotest_common.sh@10 -- $ set +x
00:01:35.292 ************************************
00:01:35.292 START TEST ubsan
00:01:35.292 ************************************
16:26:26 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:35.292 using ubsan
00:01:35.292
00:01:35.292 real 0m0.001s
00:01:35.292 user 0m0.000s
00:01:35.292 sys 0m0.000s
16:26:26 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
16:26:26 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:35.292 ************************************
00:01:35.292 END TEST ubsan
00:01:35.292 ************************************
16:26:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
16:26:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
16:26:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
16:26:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
16:26:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
16:26:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
16:26:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
16:26:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
16:26:26 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:35.553 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:35.553 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:35.814 Using 'verbs' RDMA provider
00:01:48.988 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:03.893 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:03.893 Creating mk/config.mk...done.
00:02:03.894 Creating mk/cc.flags.mk...done.
00:02:03.894 Type 'make' to build.
00:02:03.894 16:26:54 -- spdk/autobuild.sh@70 -- $ run_test make make -j128
16:26:54 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
16:26:54 -- common/autotest_common.sh@1107 -- $ xtrace_disable
16:26:54 -- common/autotest_common.sh@10 -- $ set +x
00:02:03.894 ************************************
00:02:03.894 START TEST make
00:02:03.894 ************************************
16:26:54 make -- common/autotest_common.sh@1125 -- $ make -j128
00:02:03.894 make[1]: Nothing to be done for 'all'.
00:02:04.460 The Meson build system
00:02:04.460 Version: 1.5.0
00:02:04.460 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:04.460 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:04.460 Build type: native build
00:02:04.460 Project name: libvfio-user
00:02:04.460 Project version: 0.0.1
00:02:04.460 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:04.460 C linker for the host machine: cc ld.bfd 2.40-14
00:02:04.460 Host machine cpu family: x86_64
00:02:04.460 Host machine cpu: x86_64
00:02:04.460 Run-time dependency threads found: YES
00:02:04.460 Library dl found: YES
00:02:04.460 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:04.460 Run-time dependency json-c found: YES 0.17
00:02:04.460 Run-time dependency cmocka found: YES 1.1.7
00:02:04.460 Program pytest-3 found: NO
00:02:04.460 Program flake8 found: NO
00:02:04.460 Program misspell-fixer found: NO
00:02:04.460 Program restructuredtext-lint found: NO
00:02:04.460 Program valgrind found: YES (/usr/bin/valgrind)
00:02:04.460 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:04.460 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:04.460 Compiler for C supports arguments -Wwrite-strings: YES
00:02:04.460 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:04.460 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:04.460 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:04.460 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:04.460 Build targets in project: 8
00:02:04.460 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:04.460 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:04.460
00:02:04.460 libvfio-user 0.0.1
00:02:04.460
00:02:04.460 User defined options
00:02:04.460 buildtype : debug
00:02:04.460 default_library: shared
00:02:04.460 libdir : /usr/local/lib
00:02:04.460
00:02:04.460 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:05.025 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:05.025 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:05.025 [2/37] Compiling C object samples/null.p/null.c.o
00:02:05.025 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:05.026 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:05.026 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:05.026 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:05.026 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:05.026 [8/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:05.026 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:05.026 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:05.026 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:05.026 [12/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:05.026 [13/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:05.026 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:05.026 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:05.026 [16/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:05.026 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:05.026 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:05.026 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:05.026 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:05.026 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:05.026 [22/37] Compiling C object samples/server.p/server.c.o
00:02:05.026 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:05.026 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:05.026 [25/37] Compiling C object samples/client.p/client.c.o
00:02:05.026 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:05.026 [27/37] Linking target samples/client
00:02:05.285 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:05.285 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:05.285 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:05.285 [31/37] Linking target test/unit_tests
00:02:05.285 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:05.285 [33/37] Linking target samples/gpio-pci-idio-16
00:02:05.546 [34/37] Linking target samples/server
00:02:05.546 [35/37] Linking target samples/null
00:02:05.546 [36/37] Linking target samples/lspci
00:02:05.546 [37/37] Linking target samples/shadow_ioeventfd_server
00:02:05.546 INFO: autodetecting backend as ninja
00:02:05.546 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:05.546 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:05.806 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:05.806 ninja: no work to do.
00:02:12.491 The Meson build system
00:02:12.491 Version: 1.5.0
00:02:12.491 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:12.491 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:12.491 Build type: native build
00:02:12.491 Program cat found: YES (/usr/bin/cat)
00:02:12.491 Project name: DPDK
00:02:12.491 Project version: 24.03.0
00:02:12.491 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:12.491 C linker for the host machine: cc ld.bfd 2.40-14
00:02:12.491 Host machine cpu family: x86_64
00:02:12.491 Host machine cpu: x86_64
00:02:12.491 Message: ## Building in Developer Mode ##
00:02:12.491 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:12.491 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:12.491 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:12.491 Program python3 found: YES (/usr/bin/python3)
00:02:12.491 Program cat found: YES (/usr/bin/cat)
00:02:12.491 Compiler for C supports arguments -march=native: YES
00:02:12.491 Checking for size of "void *" : 8
00:02:12.491 Checking for size of "void *" : 8 (cached)
00:02:12.491 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:12.491 Library m found: YES
00:02:12.492 Library numa found: YES
00:02:12.492 Has header "numaif.h" : YES
00:02:12.492 Library fdt found: NO
00:02:12.492 Library execinfo found: NO
00:02:12.492 Has header "execinfo.h" : YES
00:02:12.492 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:12.492 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:12.492 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:12.492 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:12.492 Run-time dependency openssl found: YES 3.1.1
00:02:12.492 Run-time dependency libpcap found: YES 1.10.4
00:02:12.492 Has header "pcap.h" with dependency libpcap: YES
00:02:12.492 Compiler for C supports arguments -Wcast-qual: YES
00:02:12.492 Compiler for C supports arguments -Wdeprecated: YES
00:02:12.492 Compiler for C supports arguments -Wformat: YES
00:02:12.492 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:12.492 Compiler for C supports arguments -Wformat-security: NO
00:02:12.492 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:12.492 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:12.492 Compiler for C supports arguments -Wnested-externs: YES
00:02:12.492 Compiler for C supports arguments -Wold-style-definition: YES
00:02:12.492 Compiler for C supports arguments -Wpointer-arith: YES
00:02:12.492 Compiler for C supports arguments -Wsign-compare: YES
00:02:12.492 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:12.492 Compiler for C supports arguments -Wundef: YES
00:02:12.492 Compiler for C supports arguments -Wwrite-strings: YES
00:02:12.492 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:12.492 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:12.492 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:12.492 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:12.492 Program objdump found: YES (/usr/bin/objdump)
00:02:12.492 Compiler for C supports arguments -mavx512f: YES
00:02:12.492 Checking if "AVX512 checking" compiles: YES
00:02:12.492 Fetching value of define "__SSE4_2__" : 1
00:02:12.492 Fetching value of define "__AES__" : 1
00:02:12.492 Fetching value of define "__AVX__" : 1
00:02:12.492 Fetching value of define "__AVX2__" : 1
00:02:12.492 Fetching value of define "__AVX512BW__" : 1
00:02:12.492 Fetching value of define "__AVX512CD__" : 1
00:02:12.492 Fetching value of define "__AVX512DQ__" : 1
00:02:12.492 Fetching value of define "__AVX512F__" : 1
00:02:12.492 Fetching value of define "__AVX512VL__" : 1
00:02:12.492 Fetching value of define "__PCLMUL__" : 1
00:02:12.492 Fetching value of define "__RDRND__" : 1
00:02:12.492 Fetching value of define "__RDSEED__" : 1
00:02:12.492 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:12.492 Fetching value of define "__znver1__" : (undefined)
00:02:12.492 Fetching value of define "__znver2__" : (undefined)
00:02:12.492 Fetching value of define "__znver3__" : (undefined)
00:02:12.492 Fetching value of define "__znver4__" : (undefined)
00:02:12.492 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:12.492 Message: lib/log: Defining dependency "log"
00:02:12.492 Message: lib/kvargs: Defining dependency "kvargs"
00:02:12.492 Message: lib/telemetry: Defining dependency "telemetry"
00:02:12.492 Checking for function "getentropy" : NO
00:02:12.492 Message: lib/eal: Defining dependency "eal"
00:02:12.492 Message: lib/ring: Defining dependency "ring"
00:02:12.492 Message: lib/rcu: Defining dependency "rcu"
00:02:12.492 Message: lib/mempool: Defining dependency "mempool"
00:02:12.492 Message: lib/mbuf: Defining dependency "mbuf"
00:02:12.492 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:12.492 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:12.492 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:12.492 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:12.492 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:12.492 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:12.492 Compiler for C supports arguments -mpclmul: YES
00:02:12.492 Compiler for C supports arguments -maes: YES
00:02:12.492 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:12.492 Compiler for C supports arguments -mavx512bw: YES
00:02:12.492 Compiler for C supports arguments -mavx512dq: YES
00:02:12.492 Compiler for C supports arguments -mavx512vl: YES
00:02:12.492 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:12.492 Compiler for C supports arguments -mavx2: YES
00:02:12.492 Compiler for C supports arguments -mavx: YES
00:02:12.492 Message: lib/net: Defining dependency "net"
00:02:12.492 Message: lib/meter: Defining dependency "meter"
00:02:12.492 Message: lib/ethdev: Defining dependency "ethdev"
00:02:12.492 Message: lib/pci: Defining dependency "pci"
00:02:12.492 Message: lib/cmdline: Defining dependency "cmdline"
00:02:12.492 Message: lib/hash: Defining dependency "hash"
00:02:12.492 Message: lib/timer: Defining dependency "timer"
00:02:12.492 Message: lib/compressdev: Defining dependency "compressdev"
00:02:12.492 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:12.492 Message: lib/dmadev: Defining dependency "dmadev"
00:02:12.492 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:12.492 Message: lib/power: Defining dependency "power"
00:02:12.492 Message: lib/reorder: Defining dependency "reorder"
00:02:12.492 Message: lib/security: Defining dependency "security"
00:02:12.492 Has header "linux/userfaultfd.h" : YES
00:02:12.492 Has header "linux/vduse.h" : YES
00:02:12.492 Message: lib/vhost: Defining dependency "vhost"
00:02:12.492 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:12.492 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:12.492 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:12.492 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:12.492 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:12.492 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:12.492 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:12.492 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:12.492 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:12.492 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:12.492 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:12.492 Configuring doxy-api-html.conf using configuration
00:02:12.492 Configuring doxy-api-man.conf using configuration
00:02:12.492 Program mandb found: YES (/usr/bin/mandb)
00:02:12.492 Program sphinx-build found: NO
00:02:12.492 Configuring rte_build_config.h using configuration
00:02:12.492 Message:
00:02:12.492 =================
00:02:12.492 Applications Enabled
00:02:12.492 =================
00:02:12.492
00:02:12.492 apps:
00:02:12.492
00:02:12.492
00:02:12.492 Message:
00:02:12.492 =================
00:02:12.492 Libraries Enabled
00:02:12.492 =================
00:02:12.492
00:02:12.492 libs:
00:02:12.492 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:12.492 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:12.492 cryptodev, dmadev, power, reorder, security, vhost,
00:02:12.492
00:02:12.492 Message:
00:02:12.492 ===============
00:02:12.492 Drivers Enabled
00:02:12.492 ===============
00:02:12.492
00:02:12.492 common:
00:02:12.492
00:02:12.492 bus:
00:02:12.492 pci, vdev,
00:02:12.492 mempool:
00:02:12.492 ring,
00:02:12.492 dma:
00:02:12.492
00:02:12.492 net:
00:02:12.492
00:02:12.492 crypto:
00:02:12.492
00:02:12.492 compress:
00:02:12.492
00:02:12.492 vdpa:
00:02:12.492
00:02:12.492
00:02:12.492 Message:
00:02:12.492 =================
00:02:12.492 Content Skipped
00:02:12.492 =================
00:02:12.492
00:02:12.492 apps:
00:02:12.492 dumpcap: explicitly disabled via build config
00:02:12.492 graph: explicitly disabled via build config
00:02:12.492 pdump: explicitly disabled via build config
00:02:12.492 proc-info: explicitly disabled via build config
00:02:12.492 test-acl: explicitly disabled via build config
00:02:12.492 test-bbdev: explicitly disabled via build config
00:02:12.492 test-cmdline: explicitly disabled via build config
00:02:12.492 test-compress-perf: explicitly disabled via build config
00:02:12.492 test-crypto-perf: explicitly disabled via build config
00:02:12.492 test-dma-perf: explicitly disabled via build config
00:02:12.492 test-eventdev: explicitly disabled via build config
00:02:12.492 test-fib: explicitly disabled via build config
00:02:12.492 test-flow-perf: explicitly disabled via build config
00:02:12.492 test-gpudev: explicitly disabled via build config
00:02:12.492 test-mldev: explicitly disabled via build config
00:02:12.492 test-pipeline: explicitly disabled via build config
00:02:12.492 test-pmd: explicitly disabled via build config
00:02:12.492 test-regex: explicitly disabled via build config
00:02:12.492 test-sad: explicitly disabled via build config
00:02:12.492 test-security-perf: explicitly disabled via build config
00:02:12.492
00:02:12.492 libs:
00:02:12.492 argparse: explicitly disabled via build config
00:02:12.492 metrics: explicitly disabled via build config
00:02:12.492 acl: explicitly disabled via build config
00:02:12.492 bbdev: explicitly disabled via build config
00:02:12.492 bitratestats: explicitly disabled via build config
00:02:12.492 bpf: explicitly disabled via build config
00:02:12.492 cfgfile: explicitly disabled via build config
00:02:12.492 distributor: explicitly disabled via build config
00:02:12.492 efd: explicitly disabled via build config
00:02:12.492 eventdev: explicitly disabled via build config
00:02:12.492 dispatcher: explicitly disabled via build config
00:02:12.492 gpudev: explicitly disabled via build config
00:02:12.492 gro: explicitly disabled via build config
00:02:12.492 gso: explicitly disabled via build config
00:02:12.492 ip_frag: explicitly disabled via build config
00:02:12.492 jobstats: explicitly disabled via build config
00:02:12.492 latencystats: explicitly disabled via build config
00:02:12.492 lpm: explicitly disabled via build config
00:02:12.492 member: explicitly disabled via build config
00:02:12.492 pcapng: explicitly disabled via build config
00:02:12.492 rawdev: explicitly disabled via build config
00:02:12.492 regexdev: explicitly disabled via build config
00:02:12.492 mldev: explicitly disabled via build config
00:02:12.493 rib: explicitly disabled via build config
00:02:12.493 sched: explicitly disabled via build config
00:02:12.493 stack: explicitly disabled via build config
00:02:12.493 ipsec: explicitly disabled via build config
00:02:12.493 pdcp: explicitly disabled via build config
00:02:12.493 fib: explicitly disabled via build config
00:02:12.493 port: explicitly disabled via build config
00:02:12.493 pdump: explicitly disabled via build config
00:02:12.493 table: explicitly disabled via build config
00:02:12.493 pipeline: explicitly disabled via build config
00:02:12.493 graph: explicitly disabled via build config
00:02:12.493 node: explicitly disabled via build config
00:02:12.493
00:02:12.493 drivers:
00:02:12.493 common/cpt: not in enabled drivers build config
00:02:12.493 common/dpaax: not in enabled drivers build config
00:02:12.493 common/iavf: not in enabled drivers build config
00:02:12.493 common/idpf: not in enabled drivers build config
00:02:12.493 common/ionic: not in enabled drivers build config
00:02:12.493 common/mvep: not in enabled drivers build config
00:02:12.493 common/octeontx: not in enabled drivers build config
00:02:12.493 bus/auxiliary: not in enabled drivers build config
00:02:12.493 bus/cdx: not in enabled drivers build config
00:02:12.493 bus/dpaa: not in enabled drivers build config
00:02:12.493 bus/fslmc: not in enabled drivers build config
00:02:12.493 bus/ifpga: not in enabled drivers build config
00:02:12.493 bus/platform: not in enabled drivers build config
00:02:12.493 bus/uacce: not in enabled drivers build config
00:02:12.493 bus/vmbus: not in enabled drivers build config
00:02:12.493 common/cnxk: not in enabled drivers build config
00:02:12.493 common/mlx5: not in enabled drivers build config
00:02:12.492 common/nfp: not in enabled drivers build config
00:02:12.492 common/nitrox: not in enabled drivers build config
00:02:12.492 common/qat: not in enabled drivers build config
00:02:12.492 common/sfc_efx: not in enabled drivers build config
00:02:12.493 mempool/bucket: not in enabled drivers build config
00:02:12.493 mempool/cnxk: not in enabled drivers build config
00:02:12.493 mempool/dpaa: not in enabled drivers build config
00:02:12.493 mempool/dpaa2: not in enabled drivers build config
00:02:12.493 mempool/octeontx: not in enabled drivers build config
00:02:12.493 mempool/stack: not in enabled drivers build config
00:02:12.493 dma/cnxk: not in enabled drivers build config
00:02:12.493 dma/dpaa: not in enabled drivers build config
00:02:12.493 dma/dpaa2: not in enabled drivers build config
00:02:12.493 dma/hisilicon: not in enabled drivers build config
00:02:12.493 dma/idxd: not in enabled drivers build config
00:02:12.493 dma/ioat: not in enabled drivers build config
00:02:12.493 dma/skeleton: not in enabled drivers build config
00:02:12.493 net/af_packet: not in enabled drivers build config
00:02:12.493 net/af_xdp: not in enabled drivers build config
00:02:12.493 net/ark: not in enabled drivers build config
00:02:12.493 net/atlantic: not in enabled drivers build config
00:02:12.493 net/avp: not in enabled drivers build config
00:02:12.493 net/axgbe: not in enabled drivers build config
00:02:12.493 net/bnx2x: not in enabled drivers build config
00:02:12.493 net/bnxt: not in enabled drivers build config
00:02:12.493 net/bonding: not in enabled drivers build config
00:02:12.493 net/cnxk: not in enabled drivers build config
00:02:12.493 net/cpfl: not in enabled drivers build config
00:02:12.493 net/cxgbe: not in enabled drivers build config
00:02:12.493 net/dpaa: not in enabled drivers build config
00:02:12.493 net/dpaa2: not in enabled drivers build config
00:02:12.493 net/e1000: not in enabled drivers build config
00:02:12.493 net/ena: not in enabled drivers build config
00:02:12.493 net/enetc: not in enabled drivers build config
00:02:12.493 net/enetfec: not in enabled drivers build config
00:02:12.493 net/enic: not in enabled drivers build config
00:02:12.493 net/failsafe: not in enabled drivers build config
00:02:12.493 net/fm10k: not in enabled drivers build config
00:02:12.493 net/gve: not in enabled drivers build config
00:02:12.493 net/hinic: not in enabled drivers build config
00:02:12.493 net/hns3: not in enabled drivers build config
00:02:12.493 net/i40e: not in enabled drivers build config
00:02:12.493 net/iavf: not in enabled drivers build config
00:02:12.493 net/ice: not in enabled drivers build config
00:02:12.493 net/idpf: not in enabled drivers build config
00:02:12.493 net/igc: not in enabled drivers build config
00:02:12.493 net/ionic: not in enabled drivers build config
00:02:12.493 net/ipn3ke: not in enabled drivers build config
00:02:12.493 net/ixgbe: not in enabled drivers build config
00:02:12.493 net/mana: not in enabled drivers build config
00:02:12.493 net/memif: not in enabled drivers build config
00:02:12.493 net/mlx4: not in enabled drivers build config
00:02:12.493 net/mlx5: not in enabled drivers build config
00:02:12.493 net/mvneta: not in enabled drivers build config
00:02:12.493 net/mvpp2: not in enabled drivers build config
00:02:12.493 net/netvsc: not in enabled drivers build config
00:02:12.493 net/nfb: not in enabled drivers build config
00:02:12.493 net/nfp: not in enabled drivers build config
00:02:12.493 net/ngbe: not in enabled drivers build config
00:02:12.493 net/null: not in enabled drivers build config
00:02:12.493 net/octeontx: not in enabled drivers build config
00:02:12.493 net/octeon_ep: not in enabled drivers build config
00:02:12.493 net/pcap: not in enabled drivers build config
00:02:12.493 net/pfe: not in enabled drivers build config
00:02:12.493 net/qede: not in enabled drivers build config
00:02:12.493 net/ring: not in enabled drivers build config
00:02:12.493 net/sfc: not in enabled drivers build config
00:02:12.493 net/softnic: not in enabled drivers build config
00:02:12.493 net/tap: not in enabled drivers build config
00:02:12.493 net/thunderx: not in enabled drivers build config
00:02:12.493 net/txgbe: not in enabled drivers build config
00:02:12.493 net/vdev_netvsc: not in enabled drivers build config
00:02:12.493 net/vhost: not in enabled drivers build config
00:02:12.493 net/virtio: not in enabled drivers build config
00:02:12.493 net/vmxnet3: not in enabled drivers build config
00:02:12.493 raw/*: missing internal dependency, "rawdev"
00:02:12.493 crypto/armv8: not in enabled drivers build config
00:02:12.493 crypto/bcmfs: not in enabled drivers build config
00:02:12.493 crypto/caam_jr: not in enabled drivers build config
00:02:12.493 crypto/ccp: not in enabled drivers build config
00:02:12.493 crypto/cnxk: not in enabled drivers build config
00:02:12.493 crypto/dpaa_sec: not in enabled drivers build config
00:02:12.493 crypto/dpaa2_sec: not in enabled drivers build config
00:02:12.493 crypto/ipsec_mb: not in enabled drivers build config
00:02:12.493 crypto/mlx5: not in enabled drivers build config
00:02:12.493 crypto/mvsam: not in enabled drivers build config
00:02:12.493 crypto/nitrox: not in enabled drivers build config
00:02:12.493 crypto/null: not in enabled drivers build config
00:02:12.493 crypto/octeontx: not in enabled drivers build config
00:02:12.493 crypto/openssl: not in enabled drivers build config
00:02:12.493 crypto/scheduler: not in enabled drivers build config
00:02:12.493 crypto/uadk: not in enabled drivers build config
00:02:12.493 crypto/virtio: not in enabled drivers build config
00:02:12.493 compress/isal: not in enabled drivers build config
00:02:12.493 compress/mlx5: not in enabled drivers build config
00:02:12.493 compress/nitrox: not in enabled drivers build config
00:02:12.493 compress/octeontx: not in enabled drivers build config
00:02:12.493 compress/zlib: not in enabled drivers build config
00:02:12.493 regex/*: missing internal dependency, "regexdev"
00:02:12.493 ml/*: missing internal dependency, "mldev"
00:02:12.493 vdpa/ifc: not in enabled drivers build config
00:02:12.493 vdpa/mlx5: not in enabled drivers build config
00:02:12.493 vdpa/nfp: not in enabled drivers build config
00:02:12.493 vdpa/sfc: not in enabled drivers build config
00:02:12.493 event/*: missing internal dependency, "eventdev"
00:02:12.493 baseband/*: missing internal dependency, "bbdev"
00:02:12.493 gpu/*: missing internal dependency, "gpudev"
00:02:12.493
00:02:12.493
00:02:12.493 Build targets in project: 84
00:02:12.493
00:02:12.493 DPDK 24.03.0
00:02:12.493
00:02:12.493 User defined options
00:02:12.493 buildtype : debug
00:02:12.493 default_library : shared
00:02:12.493 libdir : lib
00:02:12.493 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:12.493 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:12.493 c_link_args :
00:02:12.493 cpu_instruction_set: native
00:02:12.493 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:02:12.493 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:02:12.493 enable_docs : false
00:02:12.493 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:12.493 enable_kmods : false
00:02:12.493 max_lcores : 128
00:02:12.493 tests : false
00:02:12.493
00:02:12.493 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:12.493 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:12.493 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:12.493 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:12.493 [3/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:12.493 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:12.493 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:12.493 [6/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:12.493 [7/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:12.493 [8/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:12.493 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:12.493 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:12.493 [11/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:12.493 [12/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:12.493 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:12.493 [14/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:12.493 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:12.493 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:12.493 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:12.493 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:12.493 [19/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:12.493 [20/267] Linking static target lib/librte_kvargs.a
00:02:12.493 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:12.493 [22/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:12.493 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:12.493 [24/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:12.493 [25/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:12.493 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:12.493 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:12.493 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:12.493 [29/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:12.493 [30/267] Linking static target lib/librte_log.a
00:02:12.493 [31/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:12.494 [32/267] Linking static target lib/librte_pci.a
00:02:12.494 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:12.752 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:12.752 [35/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:12.752 [36/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:12.752 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:12.752 [38/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:02:12.752 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:12.752 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:12.752 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:12.752 [42/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:12.752 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:12.752 [44/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:12.752 [45/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:12.752 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:12.752 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:12.752 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:12.752 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:12.752 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:12.752 [51/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:12.752 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:12.752 [53/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:12.752 [54/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:12.752 [55/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:12.752 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:12.752 [57/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:12.752 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:12.752 [59/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:12.752 [60/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:12.752 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:12.752 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:12.752 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:12.752 [64/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:12.752 [65/267] Linking static target lib/librte_meter.a
00:02:12.752 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:12.752 [67/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:12.752 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:12.752 [69/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:12.752 [70/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:12.752 [71/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:12.752 [72/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:13.032 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:13.032 [74/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:13.032 [75/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:13.032 [76/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:13.032 [77/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:13.032 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:13.032 [79/267] Linking static target lib/librte_timer.a
00:02:13.032 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:13.032 [81/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.032 [82/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:13.032 [83/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.032 [84/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:13.032 [85/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:13.032 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:13.032 [87/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:13.032 [88/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:13.032 [89/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:13.032 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:13.032 [91/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:13.032 [92/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:13.032 [93/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:13.032 [94/267] Linking static target lib/librte_compressdev.a
00:02:13.032 [95/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:13.032 [96/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:13.032 [97/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:13.032 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:13.032 [99/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:13.032 [100/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:13.032 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:13.032 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:13.032 [103/267] Linking static target lib/librte_dmadev.a
00:02:13.032 [104/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:13.032 [105/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:13.032 [106/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:13.032 [107/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:13.032 [108/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:13.032 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:13.032 [110/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:13.033 [111/267]
Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:13.033 [112/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:13.033 [113/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:13.033 [114/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:13.033 [115/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:13.033 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:13.033 [117/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:13.033 [118/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:13.033 [119/267] Linking static target lib/librte_cmdline.a 00:02:13.033 [120/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:13.033 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:13.033 [122/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:13.033 [123/267] Linking static target lib/librte_ring.a 00:02:13.033 [124/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:13.033 [125/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:13.033 [126/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:13.033 [127/267] Linking static target lib/librte_telemetry.a 00:02:13.033 [128/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:13.033 [129/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.033 [130/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:13.033 [131/267] Linking static target lib/librte_reorder.a 00:02:13.033 [132/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:13.033 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:13.033 [134/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:13.033 [135/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:13.033 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:13.033 [137/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:13.033 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:13.033 [139/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:13.033 [140/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:13.033 [141/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:13.033 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:13.033 [143/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:13.033 [144/267] Linking static target lib/librte_mbuf.a 00:02:13.033 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:13.033 [146/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:13.033 [147/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:13.033 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:13.033 [149/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:13.033 [150/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:13.033 [151/267] Linking static target lib/librte_net.a 00:02:13.033 [152/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:13.033 [153/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:13.033 [154/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:13.322 [155/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:13.322 [156/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:13.322 [157/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:13.322 [158/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:13.322 [159/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:13.322 [160/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:13.322 [161/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:13.322 [162/267] Linking static target lib/librte_power.a 00:02:13.322 [163/267] Linking static target lib/librte_eal.a 00:02:13.322 [164/267] Linking static target lib/librte_rcu.a 00:02:13.322 [165/267] Linking static target lib/librte_mempool.a 00:02:13.322 [166/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:13.322 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:13.322 [168/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:13.322 [169/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:13.323 [170/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:13.323 [171/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:13.323 [172/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:13.323 [173/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:13.323 [174/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:13.323 [175/267] Linking static target lib/librte_security.a 00:02:13.323 [176/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:13.323 [177/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:13.323 [178/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.323 [179/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:13.323 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:13.323 [181/267] Linking static target lib/librte_cryptodev.a 00:02:13.323 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:13.323 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:13.323 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:13.323 [185/267] Linking target lib/librte_log.so.24.1 00:02:13.323 [186/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:13.323 [187/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.323 [188/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:13.323 [189/267] Linking static target lib/librte_hash.a 00:02:13.323 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:13.323 [191/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.323 [192/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:13.323 [193/267] Generating symbol file 
lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:13.323 [194/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:13.633 [195/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:13.633 [196/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:13.633 [197/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:13.633 [198/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:13.633 [199/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:13.633 [200/267] Linking static target drivers/librte_bus_vdev.a 00:02:13.633 [201/267] Linking target lib/librte_kvargs.so.24.1 00:02:13.633 [202/267] Linking static target drivers/librte_mempool_ring.a 00:02:13.633 [203/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.633 [204/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.633 [205/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.633 [206/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:13.633 [207/267] Linking static target drivers/librte_bus_pci.a 00:02:13.633 [208/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.633 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.633 [210/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:13.633 [211/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.633 [212/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.633 [213/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.633 [214/267] Linking target lib/librte_telemetry.so.24.1 00:02:13.892 [215/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:13.892 [216/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:13.892 [217/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.892 [218/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.892 [219/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.892 [220/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:13.892 [221/267] Linking static target lib/librte_ethdev.a 00:02:14.152 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.152 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.411 [224/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.411 [225/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.411 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.411 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:14.411 [228/267] Linking static target lib/librte_vhost.a 00:02:15.363 [229/267] 
Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.745 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.329 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.271 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.531 [233/267] Linking target lib/librte_eal.so.24.1 00:02:24.531 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:24.531 [235/267] Linking target lib/librte_ring.so.24.1 00:02:24.531 [236/267] Linking target lib/librte_pci.so.24.1 00:02:24.531 [237/267] Linking target lib/librte_meter.so.24.1 00:02:24.531 [238/267] Linking target lib/librte_timer.so.24.1 00:02:24.531 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:24.531 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:24.792 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:24.792 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:24.792 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:24.792 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:24.792 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:24.792 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:24.792 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:24.792 [248/267] Linking target lib/librte_rcu.so.24.1 00:02:25.053 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:25.053 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:25.053 [251/267] Linking target lib/librte_mbuf.so.24.1 00:02:25.053 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:25.053 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:25.313 [254/267] Linking target lib/librte_net.so.24.1 00:02:25.313 [255/267] Linking target lib/librte_compressdev.so.24.1 00:02:25.313 [256/267] Linking target lib/librte_reorder.so.24.1 00:02:25.313 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:25.313 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:25.313 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:25.313 [260/267] Linking target lib/librte_cmdline.so.24.1 00:02:25.313 [261/267] Linking target lib/librte_hash.so.24.1 00:02:25.313 [262/267] Linking target lib/librte_ethdev.so.24.1 00:02:25.313 [263/267] Linking target lib/librte_security.so.24.1 00:02:25.573 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:25.573 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:25.573 [266/267] Linking target lib/librte_power.so.24.1 00:02:25.573 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:25.573 INFO: autodetecting backend as ninja 00:02:25.573 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 128 00:02:33.705 CC lib/log/log.o 00:02:33.705 CC lib/log/log_flags.o 00:02:33.705 CC lib/log/log_deprecated.o 00:02:33.705 CC lib/ut_mock/mock.o 00:02:33.705 CC lib/ut/ut.o 00:02:33.705 LIB 
libspdk_log.a 00:02:33.705 LIB libspdk_ut_mock.a 00:02:33.705 SO libspdk_ut_mock.so.6.0 00:02:33.705 SO libspdk_log.so.7.0 00:02:33.705 LIB libspdk_ut.a 00:02:33.705 SYMLINK libspdk_ut_mock.so 00:02:33.705 SO libspdk_ut.so.2.0 00:02:33.966 SYMLINK libspdk_log.so 00:02:33.966 SYMLINK libspdk_ut.so 00:02:34.226 CC lib/util/base64.o 00:02:34.226 CC lib/util/bit_array.o 00:02:34.226 CC lib/util/cpuset.o 00:02:34.226 CC lib/util/crc16.o 00:02:34.226 CC lib/util/crc32.o 00:02:34.226 CC lib/util/crc32c.o 00:02:34.226 CC lib/util/crc32_ieee.o 00:02:34.226 CC lib/util/dif.o 00:02:34.226 CC lib/util/crc64.o 00:02:34.226 CC lib/util/fd.o 00:02:34.226 CC lib/util/fd_group.o 00:02:34.226 CC lib/util/file.o 00:02:34.226 CC lib/util/math.o 00:02:34.226 CC lib/util/hexlify.o 00:02:34.226 CC lib/util/iov.o 00:02:34.226 CXX lib/trace_parser/trace.o 00:02:34.226 CC lib/util/net.o 00:02:34.226 CC lib/util/uuid.o 00:02:34.226 CC lib/util/pipe.o 00:02:34.226 CC lib/util/strerror_tls.o 00:02:34.226 CC lib/util/xor.o 00:02:34.226 CC lib/util/string.o 00:02:34.226 CC lib/ioat/ioat.o 00:02:34.226 CC lib/util/zipf.o 00:02:34.226 CC lib/util/md5.o 00:02:34.226 CC lib/dma/dma.o 00:02:34.226 CC lib/vfio_user/host/vfio_user.o 00:02:34.226 CC lib/vfio_user/host/vfio_user_pci.o 00:02:34.486 LIB libspdk_dma.a 00:02:34.486 SO libspdk_dma.so.5.0 00:02:34.486 LIB libspdk_ioat.a 00:02:34.486 SYMLINK libspdk_dma.so 00:02:34.486 SO libspdk_ioat.so.7.0 00:02:34.486 SYMLINK libspdk_ioat.so 00:02:34.486 LIB libspdk_util.a 00:02:34.486 LIB libspdk_vfio_user.a 00:02:34.486 SO libspdk_vfio_user.so.5.0 00:02:34.746 SO libspdk_util.so.10.0 00:02:34.746 SYMLINK libspdk_vfio_user.so 00:02:34.746 SYMLINK libspdk_util.so 00:02:35.007 LIB libspdk_trace_parser.a 00:02:35.007 SO libspdk_trace_parser.so.6.0 00:02:35.007 CC lib/idxd/idxd.o 00:02:35.007 CC lib/idxd/idxd_user.o 00:02:35.007 CC lib/idxd/idxd_kernel.o 00:02:35.007 SYMLINK libspdk_trace_parser.so 00:02:35.007 CC lib/env_dpdk/env.o 00:02:35.007 CC lib/env_dpdk/memory.o 00:02:35.007 CC lib/env_dpdk/pci.o 00:02:35.007 CC lib/env_dpdk/init.o 00:02:35.007 CC lib/rdma_provider/common.o 00:02:35.007 CC lib/env_dpdk/threads.o 00:02:35.007 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:35.007 CC lib/env_dpdk/pci_ioat.o 00:02:35.007 CC lib/env_dpdk/pci_virtio.o 00:02:35.007 CC lib/env_dpdk/pci_vmd.o 00:02:35.007 CC lib/env_dpdk/pci_idxd.o 00:02:35.007 CC lib/env_dpdk/pci_event.o 00:02:35.007 CC lib/conf/conf.o 00:02:35.007 CC lib/env_dpdk/sigbus_handler.o 00:02:35.007 CC lib/env_dpdk/pci_dpdk.o 00:02:35.007 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:35.007 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:35.007 CC lib/json/json_write.o 00:02:35.007 CC lib/json/json_parse.o 00:02:35.007 CC lib/json/json_util.o 00:02:35.007 CC lib/rdma_utils/rdma_utils.o 00:02:35.007 CC lib/vmd/vmd.o 00:02:35.007 CC lib/vmd/led.o 00:02:35.267 LIB libspdk_rdma_provider.a 00:02:35.267 LIB libspdk_rdma_utils.a 00:02:35.267 SO libspdk_rdma_provider.so.6.0 00:02:35.267 SO libspdk_rdma_utils.so.1.0 00:02:35.267 LIB libspdk_conf.a 00:02:35.267 SO libspdk_conf.so.6.0 00:02:35.267 SYMLINK libspdk_rdma_provider.so 00:02:35.267 LIB libspdk_json.a 00:02:35.527 SYMLINK libspdk_rdma_utils.so 00:02:35.527 SYMLINK libspdk_conf.so 00:02:35.527 SO libspdk_json.so.6.0 00:02:35.527 SYMLINK libspdk_json.so 00:02:35.527 LIB libspdk_idxd.a 00:02:35.527 SO libspdk_idxd.so.12.1 00:02:35.788 LIB libspdk_vmd.a 00:02:35.788 SYMLINK libspdk_idxd.so 00:02:35.788 SO libspdk_vmd.so.6.0 00:02:35.788 SYMLINK libspdk_vmd.so 00:02:35.788 CC 
lib/jsonrpc/jsonrpc_server.o 00:02:35.788 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:35.788 CC lib/jsonrpc/jsonrpc_client.o 00:02:35.788 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:36.049 LIB libspdk_jsonrpc.a 00:02:36.049 SO libspdk_jsonrpc.so.6.0 00:02:36.310 SYMLINK libspdk_jsonrpc.so 00:02:36.310 LIB libspdk_env_dpdk.a 00:02:36.310 SO libspdk_env_dpdk.so.15.0 00:02:36.310 SYMLINK libspdk_env_dpdk.so 00:02:36.570 CC lib/rpc/rpc.o 00:02:36.831 LIB libspdk_rpc.a 00:02:36.831 SO libspdk_rpc.so.6.0 00:02:36.831 SYMLINK libspdk_rpc.so 00:02:37.091 CC lib/trace/trace.o 00:02:37.091 CC lib/trace/trace_flags.o 00:02:37.091 CC lib/trace/trace_rpc.o 00:02:37.091 CC lib/notify/notify.o 00:02:37.091 CC lib/notify/notify_rpc.o 00:02:37.353 CC lib/keyring/keyring.o 00:02:37.353 CC lib/keyring/keyring_rpc.o 00:02:37.353 LIB libspdk_notify.a 00:02:37.353 LIB libspdk_trace.a 00:02:37.353 SO libspdk_notify.so.6.0 00:02:37.353 LIB libspdk_keyring.a 00:02:37.353 SO libspdk_trace.so.11.0 00:02:37.614 SO libspdk_keyring.so.2.0 00:02:37.614 SYMLINK libspdk_notify.so 00:02:37.614 SYMLINK libspdk_trace.so 00:02:37.614 SYMLINK libspdk_keyring.so 00:02:37.874 CC lib/sock/sock.o 00:02:37.874 CC lib/sock/sock_rpc.o 00:02:37.874 CC lib/thread/thread.o 00:02:37.874 CC lib/thread/iobuf.o 00:02:38.134 LIB libspdk_sock.a 00:02:38.393 SO libspdk_sock.so.10.0 00:02:38.393 SYMLINK libspdk_sock.so 00:02:38.653 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:38.653 CC lib/nvme/nvme_ns_cmd.o 00:02:38.653 CC lib/nvme/nvme_ctrlr.o 00:02:38.653 CC lib/nvme/nvme_fabric.o 00:02:38.653 CC lib/nvme/nvme_ns.o 00:02:38.653 CC lib/nvme/nvme_pcie_common.o 00:02:38.653 CC lib/nvme/nvme_pcie.o 00:02:38.653 CC lib/nvme/nvme_qpair.o 00:02:38.653 CC lib/nvme/nvme.o 00:02:38.653 CC lib/nvme/nvme_quirks.o 00:02:38.653 CC lib/nvme/nvme_transport.o 00:02:38.653 CC lib/nvme/nvme_discovery.o 00:02:38.653 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:38.653 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:38.653 CC lib/nvme/nvme_tcp.o 00:02:38.653 CC lib/nvme/nvme_opal.o 00:02:38.653 CC lib/nvme/nvme_io_msg.o 00:02:38.653 CC lib/nvme/nvme_poll_group.o 00:02:38.653 CC lib/nvme/nvme_zns.o 00:02:38.653 CC lib/nvme/nvme_stubs.o 00:02:38.653 CC lib/nvme/nvme_auth.o 00:02:38.653 CC lib/nvme/nvme_cuse.o 00:02:38.653 CC lib/nvme/nvme_vfio_user.o 00:02:38.653 CC lib/nvme/nvme_rdma.o 00:02:39.222 LIB libspdk_thread.a 00:02:39.222 SO libspdk_thread.so.10.1 00:02:39.222 SYMLINK libspdk_thread.so 00:02:39.481 CC lib/accel/accel.o 00:02:39.481 CC lib/accel/accel_sw.o 00:02:39.481 CC lib/accel/accel_rpc.o 00:02:39.481 CC lib/init/json_config.o 00:02:39.482 CC lib/init/subsystem.o 00:02:39.482 CC lib/vfu_tgt/tgt_endpoint.o 00:02:39.482 CC lib/vfu_tgt/tgt_rpc.o 00:02:39.482 CC lib/init/subsystem_rpc.o 00:02:39.482 CC lib/init/rpc.o 00:02:39.482 CC lib/virtio/virtio.o 00:02:39.482 CC lib/virtio/virtio_vhost_user.o 00:02:39.482 CC lib/virtio/virtio_vfio_user.o 00:02:39.482 CC lib/virtio/virtio_pci.o 00:02:39.482 CC lib/fsdev/fsdev.o 00:02:39.482 CC lib/blob/blobstore.o 00:02:39.482 CC lib/fsdev/fsdev_rpc.o 00:02:39.482 CC lib/blob/request.o 00:02:39.482 CC lib/fsdev/fsdev_io.o 00:02:39.482 CC lib/blob/zeroes.o 00:02:39.482 CC lib/blob/blob_bs_dev.o 00:02:39.742 LIB libspdk_init.a 00:02:39.742 SO libspdk_init.so.6.0 00:02:39.742 LIB libspdk_vfu_tgt.a 00:02:40.004 LIB libspdk_virtio.a 00:02:40.004 SYMLINK libspdk_init.so 00:02:40.004 SO libspdk_vfu_tgt.so.3.0 00:02:40.004 SO libspdk_virtio.so.7.0 00:02:40.004 SYMLINK libspdk_vfu_tgt.so 00:02:40.004 SYMLINK libspdk_virtio.so 00:02:40.004 LIB 
libspdk_fsdev.a 00:02:40.004 SO libspdk_fsdev.so.1.0 00:02:40.265 SYMLINK libspdk_fsdev.so 00:02:40.265 CC lib/event/app.o 00:02:40.265 CC lib/event/reactor.o 00:02:40.265 CC lib/event/log_rpc.o 00:02:40.265 CC lib/event/app_rpc.o 00:02:40.265 CC lib/event/scheduler_static.o 00:02:40.526 LIB libspdk_nvme.a 00:02:40.526 LIB libspdk_accel.a 00:02:40.527 SO libspdk_accel.so.16.0 00:02:40.527 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:40.527 SO libspdk_nvme.so.14.0 00:02:40.527 SYMLINK libspdk_accel.so 00:02:40.527 LIB libspdk_event.a 00:02:40.527 SO libspdk_event.so.14.0 00:02:40.788 SYMLINK libspdk_event.so 00:02:40.788 SYMLINK libspdk_nvme.so 00:02:40.788 CC lib/bdev/bdev.o 00:02:40.788 CC lib/bdev/bdev_rpc.o 00:02:40.788 CC lib/bdev/bdev_zone.o 00:02:40.788 CC lib/bdev/part.o 00:02:40.788 CC lib/bdev/scsi_nvme.o 00:02:41.049 LIB libspdk_fuse_dispatcher.a 00:02:41.049 SO libspdk_fuse_dispatcher.so.1.0 00:02:41.372 SYMLINK libspdk_fuse_dispatcher.so 00:02:41.943 LIB libspdk_blob.a 00:02:41.943 SO libspdk_blob.so.11.0 00:02:42.202 SYMLINK libspdk_blob.so 00:02:42.462 CC lib/blobfs/blobfs.o 00:02:42.462 CC lib/lvol/lvol.o 00:02:42.462 CC lib/blobfs/tree.o 00:02:43.032 LIB libspdk_bdev.a 00:02:43.032 SO libspdk_bdev.so.16.0 00:02:43.293 SYMLINK libspdk_bdev.so 00:02:43.293 LIB libspdk_blobfs.a 00:02:43.293 SO libspdk_blobfs.so.10.0 00:02:43.293 LIB libspdk_lvol.a 00:02:43.293 SO libspdk_lvol.so.10.0 00:02:43.293 SYMLINK libspdk_blobfs.so 00:02:43.293 SYMLINK libspdk_lvol.so 00:02:43.555 CC lib/ublk/ublk.o 00:02:43.555 CC lib/ublk/ublk_rpc.o 00:02:43.555 CC lib/nbd/nbd.o 00:02:43.555 CC lib/nbd/nbd_rpc.o 00:02:43.555 CC lib/nvmf/ctrlr.o 00:02:43.555 CC lib/nvmf/ctrlr_discovery.o 00:02:43.555 CC lib/nvmf/ctrlr_bdev.o 00:02:43.555 CC lib/nvmf/subsystem.o 00:02:43.555 CC lib/nvmf/nvmf.o 00:02:43.555 CC lib/nvmf/nvmf_rpc.o 00:02:43.555 CC lib/ftl/ftl_core.o 00:02:43.555 CC lib/nvmf/transport.o 00:02:43.555 CC lib/ftl/ftl_init.o 00:02:43.555 CC lib/ftl/ftl_layout.o 00:02:43.555 CC lib/nvmf/tcp.o 00:02:43.555 CC lib/nvmf/stubs.o 00:02:43.555 CC lib/ftl/ftl_debug.o 00:02:43.555 CC lib/nvmf/mdns_server.o 00:02:43.555 CC lib/ftl/ftl_io.o 00:02:43.555 CC lib/scsi/dev.o 00:02:43.555 CC lib/nvmf/vfio_user.o 00:02:43.555 CC lib/ftl/ftl_sb.o 00:02:43.555 CC lib/nvmf/rdma.o 00:02:43.555 CC lib/scsi/lun.o 00:02:43.555 CC lib/nvmf/auth.o 00:02:43.555 CC lib/ftl/ftl_l2p.o 00:02:43.555 CC lib/scsi/port.o 00:02:43.555 CC lib/scsi/scsi_bdev.o 00:02:43.555 CC lib/ftl/ftl_l2p_flat.o 00:02:43.555 CC lib/scsi/scsi.o 00:02:43.555 CC lib/ftl/ftl_nv_cache.o 00:02:43.555 CC lib/ftl/ftl_band.o 00:02:43.555 CC lib/scsi/scsi_pr.o 00:02:43.555 CC lib/ftl/ftl_band_ops.o 00:02:43.555 CC lib/ftl/ftl_writer.o 00:02:43.555 CC lib/scsi/scsi_rpc.o 00:02:43.555 CC lib/ftl/ftl_rq.o 00:02:43.555 CC lib/ftl/ftl_reloc.o 00:02:43.555 CC lib/scsi/task.o 00:02:43.555 CC lib/ftl/ftl_l2p_cache.o 00:02:43.555 CC lib/ftl/ftl_p2l.o 00:02:43.555 CC lib/ftl/ftl_p2l_log.o 00:02:43.555 CC lib/ftl/mngt/ftl_mngt.o 00:02:43.555 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:43.555 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:43.555 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:43.555 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:43.555 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:43.555 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:43.555 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:43.555 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:43.555 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:43.555 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:43.555 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:43.555 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:02:43.555 CC lib/ftl/utils/ftl_conf.o 00:02:43.555 CC lib/ftl/utils/ftl_md.o 00:02:43.555 CC lib/ftl/utils/ftl_bitmap.o 00:02:43.555 CC lib/ftl/utils/ftl_mempool.o 00:02:43.555 CC lib/ftl/utils/ftl_property.o 00:02:43.555 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:43.555 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:43.555 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:43.555 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:43.555 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:43.555 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:43.555 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:43.555 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:43.555 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:43.555 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:43.555 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:43.555 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:43.555 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:43.555 CC lib/ftl/base/ftl_base_dev.o 00:02:43.555 CC lib/ftl/base/ftl_base_bdev.o 00:02:43.555 CC lib/ftl/ftl_trace.o 00:02:44.123 LIB libspdk_nbd.a 00:02:44.123 SO libspdk_nbd.so.7.0 00:02:44.123 LIB libspdk_scsi.a 00:02:44.123 SYMLINK libspdk_nbd.so 00:02:44.123 SO libspdk_scsi.so.9.0 00:02:44.123 SYMLINK libspdk_scsi.so 00:02:44.123 LIB libspdk_ublk.a 00:02:44.383 SO libspdk_ublk.so.3.0 00:02:44.383 SYMLINK libspdk_ublk.so 00:02:44.383 LIB libspdk_ftl.a 00:02:44.383 CC lib/vhost/vhost.o 00:02:44.383 CC lib/vhost/vhost_scsi.o 00:02:44.383 CC lib/vhost/vhost_rpc.o 00:02:44.383 CC lib/vhost/rte_vhost_user.o 00:02:44.383 CC lib/vhost/vhost_blk.o 00:02:44.383 CC lib/iscsi/conn.o 00:02:44.383 CC lib/iscsi/iscsi.o 00:02:44.383 CC lib/iscsi/init_grp.o 00:02:44.383 CC lib/iscsi/param.o 00:02:44.383 CC lib/iscsi/portal_grp.o 00:02:44.383 CC lib/iscsi/tgt_node.o 00:02:44.383 CC lib/iscsi/iscsi_subsystem.o 00:02:44.383 CC lib/iscsi/iscsi_rpc.o 00:02:44.383 CC lib/iscsi/task.o 00:02:44.643 SO libspdk_ftl.so.9.0 00:02:44.902 SYMLINK libspdk_ftl.so 00:02:45.473 LIB libspdk_vhost.a 00:02:45.473 SO libspdk_vhost.so.8.0 00:02:45.473 LIB libspdk_nvmf.a 00:02:45.473 SYMLINK libspdk_vhost.so 00:02:45.473 SO libspdk_nvmf.so.19.0 00:02:45.734 LIB libspdk_iscsi.a 00:02:45.734 SO libspdk_iscsi.so.8.0 00:02:45.734 SYMLINK libspdk_nvmf.so 00:02:45.734 SYMLINK libspdk_iscsi.so 00:02:46.304 CC module/env_dpdk/env_dpdk_rpc.o 00:02:46.304 CC module/vfu_device/vfu_virtio.o 00:02:46.304 CC module/vfu_device/vfu_virtio_blk.o 00:02:46.304 CC module/vfu_device/vfu_virtio_scsi.o 00:02:46.304 CC module/vfu_device/vfu_virtio_rpc.o 00:02:46.304 CC module/vfu_device/vfu_virtio_fs.o 00:02:46.564 LIB libspdk_env_dpdk_rpc.a 00:02:46.564 CC module/keyring/linux/keyring.o 00:02:46.564 CC module/sock/posix/posix.o 00:02:46.564 CC module/keyring/linux/keyring_rpc.o 00:02:46.564 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:46.564 CC module/accel/dsa/accel_dsa.o 00:02:46.564 CC module/accel/iaa/accel_iaa_rpc.o 00:02:46.564 CC module/accel/iaa/accel_iaa.o 00:02:46.564 CC module/accel/dsa/accel_dsa_rpc.o 00:02:46.564 CC module/accel/ioat/accel_ioat_rpc.o 00:02:46.564 CC module/fsdev/aio/fsdev_aio.o 00:02:46.564 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:46.564 CC module/accel/ioat/accel_ioat.o 00:02:46.564 CC module/accel/error/accel_error_rpc.o 00:02:46.564 CC module/fsdev/aio/linux_aio_mgr.o 00:02:46.564 CC module/accel/error/accel_error.o 00:02:46.564 CC module/blob/bdev/blob_bdev.o 00:02:46.564 CC module/scheduler/gscheduler/gscheduler.o 00:02:46.564 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:46.564 CC module/keyring/file/keyring_rpc.o 
00:02:46.564 CC module/keyring/file/keyring.o 00:02:46.564 SO libspdk_env_dpdk_rpc.so.6.0 00:02:46.564 SYMLINK libspdk_env_dpdk_rpc.so 00:02:46.824 LIB libspdk_accel_iaa.a 00:02:46.824 LIB libspdk_keyring_linux.a 00:02:46.824 LIB libspdk_scheduler_gscheduler.a 00:02:46.824 SO libspdk_keyring_linux.so.1.0 00:02:46.824 SO libspdk_accel_iaa.so.3.0 00:02:46.824 LIB libspdk_accel_ioat.a 00:02:46.824 LIB libspdk_keyring_file.a 00:02:46.824 LIB libspdk_scheduler_dpdk_governor.a 00:02:46.824 LIB libspdk_accel_error.a 00:02:46.824 SO libspdk_scheduler_gscheduler.so.4.0 00:02:46.824 LIB libspdk_scheduler_dynamic.a 00:02:46.824 SO libspdk_accel_error.so.2.0 00:02:46.824 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:46.824 SO libspdk_keyring_file.so.2.0 00:02:46.824 SO libspdk_accel_ioat.so.6.0 00:02:46.824 SO libspdk_scheduler_dynamic.so.4.0 00:02:46.824 LIB libspdk_accel_dsa.a 00:02:46.824 SYMLINK libspdk_keyring_linux.so 00:02:46.824 SYMLINK libspdk_accel_iaa.so 00:02:46.824 LIB libspdk_blob_bdev.a 00:02:46.824 SYMLINK libspdk_scheduler_gscheduler.so 00:02:46.824 SYMLINK libspdk_accel_error.so 00:02:46.824 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:46.824 SO libspdk_blob_bdev.so.11.0 00:02:46.824 SO libspdk_accel_dsa.so.5.0 00:02:46.824 SYMLINK libspdk_keyring_file.so 00:02:46.824 SYMLINK libspdk_accel_ioat.so 00:02:46.824 SYMLINK libspdk_scheduler_dynamic.so 00:02:46.824 SYMLINK libspdk_blob_bdev.so 00:02:46.824 SYMLINK libspdk_accel_dsa.so 00:02:46.824 LIB libspdk_vfu_device.a 00:02:47.085 SO libspdk_vfu_device.so.3.0 00:02:47.085 SYMLINK libspdk_vfu_device.so 00:02:47.085 LIB libspdk_fsdev_aio.a 00:02:47.085 SO libspdk_fsdev_aio.so.1.0 00:02:47.085 LIB libspdk_sock_posix.a 00:02:47.085 SO libspdk_sock_posix.so.6.0 00:02:47.085 SYMLINK libspdk_fsdev_aio.so 00:02:47.344 SYMLINK libspdk_sock_posix.so 00:02:47.344 CC module/bdev/nvme/bdev_nvme.o 00:02:47.344 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:47.344 CC module/bdev/nvme/nvme_rpc.o 00:02:47.344 CC module/bdev/nvme/bdev_mdns_client.o 00:02:47.344 CC module/bdev/nvme/vbdev_opal.o 00:02:47.344 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:47.344 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:47.344 CC module/bdev/lvol/vbdev_lvol.o 00:02:47.344 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:47.344 CC module/bdev/gpt/vbdev_gpt.o 00:02:47.344 CC module/bdev/gpt/gpt.o 00:02:47.345 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:47.345 CC module/bdev/delay/vbdev_delay.o 00:02:47.345 CC module/bdev/error/vbdev_error_rpc.o 00:02:47.345 CC module/bdev/error/vbdev_error.o 00:02:47.345 CC module/blobfs/bdev/blobfs_bdev.o 00:02:47.345 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:47.345 CC module/bdev/raid/bdev_raid.o 00:02:47.345 CC module/bdev/raid/bdev_raid_rpc.o 00:02:47.345 CC module/bdev/raid/bdev_raid_sb.o 00:02:47.345 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:47.345 CC module/bdev/null/bdev_null.o 00:02:47.345 CC module/bdev/raid/raid0.o 00:02:47.345 CC module/bdev/raid/raid1.o 00:02:47.345 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:47.345 CC module/bdev/null/bdev_null_rpc.o 00:02:47.345 CC module/bdev/malloc/bdev_malloc.o 00:02:47.345 CC module/bdev/raid/concat.o 00:02:47.345 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:47.345 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:47.345 CC module/bdev/split/vbdev_split.o 00:02:47.345 CC module/bdev/passthru/vbdev_passthru.o 00:02:47.345 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:47.345 CC module/bdev/split/vbdev_split_rpc.o 00:02:47.345 CC module/bdev/aio/bdev_aio.o 00:02:47.345 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:47.345 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:47.345 CC module/bdev/aio/bdev_aio_rpc.o 00:02:47.345 CC module/bdev/ftl/bdev_ftl.o 00:02:47.345 CC module/bdev/iscsi/bdev_iscsi.o 00:02:47.345 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:47.345 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:47.603 LIB libspdk_blobfs_bdev.a 00:02:47.603 LIB libspdk_bdev_gpt.a 00:02:47.603 LIB libspdk_bdev_split.a 00:02:47.603 SO libspdk_blobfs_bdev.so.6.0 00:02:47.603 LIB libspdk_bdev_error.a 00:02:47.603 SO libspdk_bdev_gpt.so.6.0 00:02:47.863 LIB libspdk_bdev_null.a 00:02:47.863 SO libspdk_bdev_split.so.6.0 00:02:47.863 SO libspdk_bdev_error.so.6.0 00:02:47.863 SYMLINK libspdk_blobfs_bdev.so 00:02:47.863 SO libspdk_bdev_null.so.6.0 00:02:47.863 SYMLINK libspdk_bdev_gpt.so 00:02:47.863 LIB libspdk_bdev_zone_block.a 00:02:47.863 LIB libspdk_bdev_ftl.a 00:02:47.863 SYMLINK libspdk_bdev_null.so 00:02:47.863 LIB libspdk_bdev_passthru.a 00:02:47.863 LIB libspdk_bdev_malloc.a 00:02:47.863 LIB libspdk_bdev_iscsi.a 00:02:47.863 LIB libspdk_bdev_aio.a 00:02:47.863 SYMLINK libspdk_bdev_split.so 00:02:47.863 SYMLINK libspdk_bdev_error.so 00:02:47.863 LIB libspdk_bdev_delay.a 00:02:47.863 SO libspdk_bdev_zone_block.so.6.0 00:02:47.863 SO libspdk_bdev_iscsi.so.6.0 00:02:47.863 SO libspdk_bdev_ftl.so.6.0 00:02:47.863 SO libspdk_bdev_malloc.so.6.0 00:02:47.863 SO libspdk_bdev_passthru.so.6.0 00:02:47.863 SO libspdk_bdev_aio.so.6.0 00:02:47.863 SO libspdk_bdev_delay.so.6.0 00:02:47.863 LIB libspdk_bdev_lvol.a 00:02:47.863 SYMLINK libspdk_bdev_malloc.so 00:02:47.863 SYMLINK libspdk_bdev_zone_block.so 00:02:47.863 SYMLINK libspdk_bdev_iscsi.so 00:02:47.863 SYMLINK libspdk_bdev_ftl.so 00:02:47.863 SYMLINK libspdk_bdev_passthru.so 00:02:47.863 SYMLINK libspdk_bdev_aio.so 00:02:47.863 SYMLINK libspdk_bdev_delay.so 00:02:47.863 SO libspdk_bdev_lvol.so.6.0 00:02:47.863 LIB libspdk_bdev_virtio.a 00:02:47.863 SYMLINK libspdk_bdev_lvol.so 00:02:48.122 SO libspdk_bdev_virtio.so.6.0 00:02:48.122 SYMLINK libspdk_bdev_virtio.so 00:02:48.380 LIB libspdk_bdev_raid.a 00:02:48.380 SO libspdk_bdev_raid.so.6.0 00:02:48.380 SYMLINK libspdk_bdev_raid.so 00:02:49.317 LIB libspdk_bdev_nvme.a 00:02:49.317 SO libspdk_bdev_nvme.so.7.0 00:02:49.317 SYMLINK libspdk_bdev_nvme.so 00:02:49.889 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:49.889 CC module/event/subsystems/fsdev/fsdev.o 00:02:49.889 CC module/event/subsystems/iobuf/iobuf.o 00:02:49.889 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:49.889 CC module/event/subsystems/vmd/vmd.o 00:02:49.889 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:49.889 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:49.889 CC module/event/subsystems/sock/sock.o 00:02:49.889 CC module/event/subsystems/keyring/keyring.o 00:02:50.149 CC module/event/subsystems/scheduler/scheduler.o 00:02:50.149 LIB libspdk_event_fsdev.a 00:02:50.149 LIB libspdk_event_vhost_blk.a 00:02:50.149 LIB libspdk_event_vfu_tgt.a 00:02:50.149 LIB libspdk_event_iobuf.a 00:02:50.149 SO libspdk_event_fsdev.so.1.0 00:02:50.149 LIB libspdk_event_vmd.a 00:02:50.149 LIB libspdk_event_keyring.a 00:02:50.149 LIB libspdk_event_sock.a 00:02:50.149 LIB libspdk_event_scheduler.a 00:02:50.149 SO libspdk_event_vfu_tgt.so.3.0 00:02:50.149 SO libspdk_event_vhost_blk.so.3.0 00:02:50.149 SO libspdk_event_iobuf.so.3.0 00:02:50.149 SO libspdk_event_keyring.so.1.0 00:02:50.149 SO libspdk_event_vmd.so.6.0 00:02:50.149 SO libspdk_event_sock.so.5.0 00:02:50.149 SO libspdk_event_scheduler.so.4.0 
00:02:50.149 SYMLINK libspdk_event_fsdev.so 00:02:50.149 SYMLINK libspdk_event_vfu_tgt.so 00:02:50.413 SYMLINK libspdk_event_vhost_blk.so 00:02:50.413 SYMLINK libspdk_event_iobuf.so 00:02:50.413 SYMLINK libspdk_event_keyring.so 00:02:50.413 SYMLINK libspdk_event_sock.so 00:02:50.413 SYMLINK libspdk_event_vmd.so 00:02:50.413 SYMLINK libspdk_event_scheduler.so 00:02:50.699 CC module/event/subsystems/accel/accel.o 00:02:50.699 LIB libspdk_event_accel.a 00:02:50.699 SO libspdk_event_accel.so.6.0 00:02:50.988 SYMLINK libspdk_event_accel.so 00:02:51.247 CC module/event/subsystems/bdev/bdev.o 00:02:51.247 LIB libspdk_event_bdev.a 00:02:51.508 SO libspdk_event_bdev.so.6.0 00:02:51.508 SYMLINK libspdk_event_bdev.so 00:02:51.769 CC module/event/subsystems/ublk/ublk.o 00:02:51.769 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:51.769 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:51.769 CC module/event/subsystems/scsi/scsi.o 00:02:51.769 CC module/event/subsystems/nbd/nbd.o 00:02:52.028 LIB libspdk_event_ublk.a 00:02:52.028 SO libspdk_event_ublk.so.3.0 00:02:52.028 LIB libspdk_event_nbd.a 00:02:52.028 LIB libspdk_event_scsi.a 00:02:52.028 SO libspdk_event_nbd.so.6.0 00:02:52.028 SO libspdk_event_scsi.so.6.0 00:02:52.028 SYMLINK libspdk_event_ublk.so 00:02:52.028 LIB libspdk_event_nvmf.a 00:02:52.028 SO libspdk_event_nvmf.so.6.0 00:02:52.028 SYMLINK libspdk_event_nbd.so 00:02:52.028 SYMLINK libspdk_event_scsi.so 00:02:52.288 SYMLINK libspdk_event_nvmf.so 00:02:52.583 CC module/event/subsystems/iscsi/iscsi.o 00:02:52.583 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:52.583 LIB libspdk_event_vhost_scsi.a 00:02:52.583 LIB libspdk_event_iscsi.a 00:02:52.583 SO libspdk_event_vhost_scsi.so.3.0 00:02:52.583 SO libspdk_event_iscsi.so.6.0 00:02:52.843 SYMLINK libspdk_event_vhost_scsi.so 00:02:52.843 SYMLINK libspdk_event_iscsi.so 00:02:52.843 SO libspdk.so.6.0 00:02:52.843 SYMLINK libspdk.so 00:02:53.416 CC app/spdk_nvme_perf/perf.o 00:02:53.416 CC test/rpc_client/rpc_client_test.o 00:02:53.416 CC app/trace_record/trace_record.o 00:02:53.416 CC app/spdk_lspci/spdk_lspci.o 00:02:53.416 CC app/spdk_nvme_identify/identify.o 00:02:53.416 CXX app/trace/trace.o 00:02:53.416 CC app/spdk_top/spdk_top.o 00:02:53.416 TEST_HEADER include/spdk/accel.h 00:02:53.416 TEST_HEADER include/spdk/accel_module.h 00:02:53.416 TEST_HEADER include/spdk/assert.h 00:02:53.416 TEST_HEADER include/spdk/barrier.h 00:02:53.416 TEST_HEADER include/spdk/bdev.h 00:02:53.416 TEST_HEADER include/spdk/base64.h 00:02:53.416 TEST_HEADER include/spdk/bit_pool.h 00:02:53.416 TEST_HEADER include/spdk/bdev_zone.h 00:02:53.416 CC app/spdk_nvme_discover/discovery_aer.o 00:02:53.416 TEST_HEADER include/spdk/bdev_module.h 00:02:53.416 TEST_HEADER include/spdk/bit_array.h 00:02:53.416 TEST_HEADER include/spdk/blob_bdev.h 00:02:53.416 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:53.416 TEST_HEADER include/spdk/blob.h 00:02:53.416 TEST_HEADER include/spdk/blobfs.h 00:02:53.416 TEST_HEADER include/spdk/config.h 00:02:53.416 TEST_HEADER include/spdk/conf.h 00:02:53.416 TEST_HEADER include/spdk/cpuset.h 00:02:53.416 TEST_HEADER include/spdk/crc16.h 00:02:53.416 TEST_HEADER include/spdk/crc32.h 00:02:53.416 TEST_HEADER include/spdk/crc64.h 00:02:53.416 TEST_HEADER include/spdk/dif.h 00:02:53.416 TEST_HEADER include/spdk/dma.h 00:02:53.416 TEST_HEADER include/spdk/endian.h 00:02:53.416 TEST_HEADER include/spdk/env.h 00:02:53.416 CC app/spdk_tgt/spdk_tgt.o 00:02:53.416 TEST_HEADER include/spdk/env_dpdk.h 00:02:53.416 TEST_HEADER include/spdk/event.h 
00:02:53.416 TEST_HEADER include/spdk/fd_group.h 00:02:53.416 TEST_HEADER include/spdk/fsdev.h 00:02:53.416 TEST_HEADER include/spdk/fd.h 00:02:53.416 TEST_HEADER include/spdk/file.h 00:02:53.416 TEST_HEADER include/spdk/fsdev_module.h 00:02:53.416 TEST_HEADER include/spdk/ftl.h 00:02:53.416 TEST_HEADER include/spdk/gpt_spec.h 00:02:53.416 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:53.416 TEST_HEADER include/spdk/histogram_data.h 00:02:53.416 TEST_HEADER include/spdk/hexlify.h 00:02:53.416 TEST_HEADER include/spdk/idxd.h 00:02:53.416 TEST_HEADER include/spdk/init.h 00:02:53.416 CC app/spdk_dd/spdk_dd.o 00:02:53.416 TEST_HEADER include/spdk/idxd_spec.h 00:02:53.416 TEST_HEADER include/spdk/ioat_spec.h 00:02:53.416 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:53.416 TEST_HEADER include/spdk/ioat.h 00:02:53.416 TEST_HEADER include/spdk/iscsi_spec.h 00:02:53.416 TEST_HEADER include/spdk/json.h 00:02:53.416 CC app/iscsi_tgt/iscsi_tgt.o 00:02:53.416 TEST_HEADER include/spdk/jsonrpc.h 00:02:53.416 CC app/nvmf_tgt/nvmf_main.o 00:02:53.416 TEST_HEADER include/spdk/keyring.h 00:02:53.416 TEST_HEADER include/spdk/log.h 00:02:53.416 TEST_HEADER include/spdk/keyring_module.h 00:02:53.416 TEST_HEADER include/spdk/likely.h 00:02:53.416 TEST_HEADER include/spdk/lvol.h 00:02:53.416 TEST_HEADER include/spdk/memory.h 00:02:53.416 TEST_HEADER include/spdk/md5.h 00:02:53.416 TEST_HEADER include/spdk/mmio.h 00:02:53.416 TEST_HEADER include/spdk/net.h 00:02:53.416 TEST_HEADER include/spdk/nbd.h 00:02:53.416 TEST_HEADER include/spdk/notify.h 00:02:53.416 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:53.416 TEST_HEADER include/spdk/nvme.h 00:02:53.416 TEST_HEADER include/spdk/nvme_intel.h 00:02:53.416 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:53.416 TEST_HEADER include/spdk/nvme_spec.h 00:02:53.416 TEST_HEADER include/spdk/nvme_zns.h 00:02:53.416 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:53.416 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:53.416 TEST_HEADER include/spdk/nvmf.h 00:02:53.416 TEST_HEADER include/spdk/nvmf_spec.h 00:02:53.416 TEST_HEADER include/spdk/nvmf_transport.h 00:02:53.416 TEST_HEADER include/spdk/opal.h 00:02:53.416 TEST_HEADER include/spdk/opal_spec.h 00:02:53.416 TEST_HEADER include/spdk/pipe.h 00:02:53.416 TEST_HEADER include/spdk/queue.h 00:02:53.416 TEST_HEADER include/spdk/pci_ids.h 00:02:53.416 TEST_HEADER include/spdk/reduce.h 00:02:53.416 TEST_HEADER include/spdk/rpc.h 00:02:53.416 TEST_HEADER include/spdk/scheduler.h 00:02:53.416 TEST_HEADER include/spdk/scsi.h 00:02:53.416 TEST_HEADER include/spdk/scsi_spec.h 00:02:53.416 TEST_HEADER include/spdk/sock.h 00:02:53.416 TEST_HEADER include/spdk/stdinc.h 00:02:53.416 TEST_HEADER include/spdk/string.h 00:02:53.416 TEST_HEADER include/spdk/thread.h 00:02:53.416 TEST_HEADER include/spdk/trace.h 00:02:53.416 TEST_HEADER include/spdk/trace_parser.h 00:02:53.416 TEST_HEADER include/spdk/tree.h 00:02:53.416 TEST_HEADER include/spdk/ublk.h 00:02:53.416 TEST_HEADER include/spdk/util.h 00:02:53.416 TEST_HEADER include/spdk/uuid.h 00:02:53.416 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:53.416 TEST_HEADER include/spdk/version.h 00:02:53.416 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:53.416 TEST_HEADER include/spdk/vhost.h 00:02:53.416 TEST_HEADER include/spdk/vmd.h 00:02:53.416 TEST_HEADER include/spdk/xor.h 00:02:53.416 TEST_HEADER include/spdk/zipf.h 00:02:53.416 CXX test/cpp_headers/accel.o 00:02:53.416 CXX test/cpp_headers/accel_module.o 00:02:53.416 CXX test/cpp_headers/barrier.o 00:02:53.416 CXX 
test/cpp_headers/assert.o 00:02:53.416 CXX test/cpp_headers/bdev.o 00:02:53.416 CXX test/cpp_headers/base64.o 00:02:53.416 CXX test/cpp_headers/bdev_module.o 00:02:53.416 CXX test/cpp_headers/bdev_zone.o 00:02:53.416 CXX test/cpp_headers/bit_array.o 00:02:53.416 CXX test/cpp_headers/blob_bdev.o 00:02:53.417 CXX test/cpp_headers/blobfs_bdev.o 00:02:53.417 CXX test/cpp_headers/bit_pool.o 00:02:53.417 CXX test/cpp_headers/blob.o 00:02:53.417 CXX test/cpp_headers/blobfs.o 00:02:53.417 CXX test/cpp_headers/conf.o 00:02:53.417 CXX test/cpp_headers/cpuset.o 00:02:53.417 CXX test/cpp_headers/crc16.o 00:02:53.417 CXX test/cpp_headers/dif.o 00:02:53.417 CXX test/cpp_headers/crc64.o 00:02:53.417 CXX test/cpp_headers/config.o 00:02:53.417 CXX test/cpp_headers/env_dpdk.o 00:02:53.417 CXX test/cpp_headers/endian.o 00:02:53.417 CXX test/cpp_headers/crc32.o 00:02:53.417 CXX test/cpp_headers/env.o 00:02:53.417 CXX test/cpp_headers/fd.o 00:02:53.417 CXX test/cpp_headers/dma.o 00:02:53.417 CXX test/cpp_headers/event.o 00:02:53.417 CXX test/cpp_headers/fd_group.o 00:02:53.417 CXX test/cpp_headers/fsdev_module.o 00:02:53.417 CXX test/cpp_headers/ftl.o 00:02:53.417 CXX test/cpp_headers/file.o 00:02:53.417 CXX test/cpp_headers/fsdev.o 00:02:53.417 CXX test/cpp_headers/fuse_dispatcher.o 00:02:53.417 CXX test/cpp_headers/gpt_spec.o 00:02:53.417 CXX test/cpp_headers/histogram_data.o 00:02:53.417 CXX test/cpp_headers/hexlify.o 00:02:53.417 CXX test/cpp_headers/idxd_spec.o 00:02:53.417 CXX test/cpp_headers/idxd.o 00:02:53.417 CXX test/cpp_headers/ioat.o 00:02:53.417 CXX test/cpp_headers/init.o 00:02:53.417 CXX test/cpp_headers/json.o 00:02:53.417 CXX test/cpp_headers/ioat_spec.o 00:02:53.417 CXX test/cpp_headers/iscsi_spec.o 00:02:53.417 CXX test/cpp_headers/jsonrpc.o 00:02:53.417 CXX test/cpp_headers/likely.o 00:02:53.417 CXX test/cpp_headers/keyring_module.o 00:02:53.417 CXX test/cpp_headers/log.o 00:02:53.417 CXX test/cpp_headers/keyring.o 00:02:53.417 CXX test/cpp_headers/md5.o 00:02:53.417 CXX test/cpp_headers/memory.o 00:02:53.417 CXX test/cpp_headers/mmio.o 00:02:53.417 CXX test/cpp_headers/net.o 00:02:53.417 CXX test/cpp_headers/lvol.o 00:02:53.417 CC test/env/pci/pci_ut.o 00:02:53.417 CXX test/cpp_headers/nvme_intel.o 00:02:53.417 CXX test/cpp_headers/nbd.o 00:02:53.417 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:53.417 CXX test/cpp_headers/nvme_spec.o 00:02:53.417 CXX test/cpp_headers/nvme.o 00:02:53.417 CXX test/cpp_headers/notify.o 00:02:53.676 CXX test/cpp_headers/nvme_ocssd.o 00:02:53.676 CC test/env/memory/memory_ut.o 00:02:53.676 CXX test/cpp_headers/nvmf_cmd.o 00:02:53.676 CXX test/cpp_headers/nvme_zns.o 00:02:53.676 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:53.676 CXX test/cpp_headers/nvmf.o 00:02:53.676 CXX test/cpp_headers/opal.o 00:02:53.676 CXX test/cpp_headers/nvmf_spec.o 00:02:53.676 CXX test/cpp_headers/pci_ids.o 00:02:53.676 CXX test/cpp_headers/nvmf_transport.o 00:02:53.676 CXX test/cpp_headers/pipe.o 00:02:53.676 LINK spdk_lspci 00:02:53.676 CXX test/cpp_headers/queue.o 00:02:53.676 CXX test/cpp_headers/reduce.o 00:02:53.676 CXX test/cpp_headers/opal_spec.o 00:02:53.676 CXX test/cpp_headers/rpc.o 00:02:53.676 CXX test/cpp_headers/scheduler.o 00:02:53.676 LINK rpc_client_test 00:02:53.676 CXX test/cpp_headers/scsi.o 00:02:53.676 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:53.676 CXX test/cpp_headers/scsi_spec.o 00:02:53.676 CXX test/cpp_headers/sock.o 00:02:53.676 CXX test/cpp_headers/stdinc.o 00:02:53.676 CXX test/cpp_headers/thread.o 00:02:53.676 CXX 
test/cpp_headers/trace_parser.o 00:02:53.676 CXX test/cpp_headers/string.o 00:02:53.676 CC test/env/vtophys/vtophys.o 00:02:53.677 CXX test/cpp_headers/trace.o 00:02:53.677 CXX test/cpp_headers/tree.o 00:02:53.677 CC test/app/stub/stub.o 00:02:53.677 CC examples/util/zipf/zipf.o 00:02:53.677 CXX test/cpp_headers/uuid.o 00:02:53.677 CXX test/cpp_headers/ublk.o 00:02:53.677 CXX test/cpp_headers/version.o 00:02:53.677 CXX test/cpp_headers/util.o 00:02:53.677 CXX test/cpp_headers/vfio_user_pci.o 00:02:53.677 CXX test/cpp_headers/vhost.o 00:02:53.677 CXX test/cpp_headers/vmd.o 00:02:53.677 CXX test/cpp_headers/xor.o 00:02:53.677 CXX test/cpp_headers/vfio_user_spec.o 00:02:53.677 CXX test/cpp_headers/zipf.o 00:02:53.677 CC test/app/jsoncat/jsoncat.o 00:02:53.677 CC test/app/histogram_perf/histogram_perf.o 00:02:53.677 CC examples/ioat/verify/verify.o 00:02:53.677 CC test/app/bdev_svc/bdev_svc.o 00:02:53.677 CC test/thread/poller_perf/poller_perf.o 00:02:53.677 CC app/fio/nvme/fio_plugin.o 00:02:53.677 CC test/dma/test_dma/test_dma.o 00:02:53.677 CC examples/ioat/perf/perf.o 00:02:53.677 LINK spdk_nvme_discover 00:02:53.677 CC app/fio/bdev/fio_plugin.o 00:02:53.677 LINK iscsi_tgt 00:02:53.941 LINK nvmf_tgt 00:02:53.941 LINK spdk_tgt 00:02:53.941 CC test/env/mem_callbacks/mem_callbacks.o 00:02:53.941 LINK spdk_trace_record 00:02:53.941 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:53.941 LINK interrupt_tgt 00:02:53.941 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:53.941 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:54.200 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:54.200 LINK vtophys 00:02:54.200 LINK bdev_svc 00:02:54.200 LINK ioat_perf 00:02:54.200 LINK env_dpdk_post_init 00:02:54.200 LINK zipf 00:02:54.200 LINK poller_perf 00:02:54.459 LINK jsoncat 00:02:54.459 LINK histogram_perf 00:02:54.459 LINK spdk_dd 00:02:54.459 LINK nvme_fuzz 00:02:54.459 LINK stub 00:02:54.459 LINK verify 00:02:54.459 LINK spdk_trace 00:02:54.459 LINK vhost_fuzz 00:02:54.459 LINK spdk_nvme 00:02:54.720 LINK pci_ut 00:02:54.720 LINK spdk_nvme_perf 00:02:54.720 LINK test_dma 00:02:54.720 CC examples/sock/hello_world/hello_sock.o 00:02:54.720 CC test/event/event_perf/event_perf.o 00:02:54.720 CC test/event/reactor_perf/reactor_perf.o 00:02:54.720 CC test/event/reactor/reactor.o 00:02:54.720 CC app/vhost/vhost.o 00:02:54.720 CC examples/vmd/lsvmd/lsvmd.o 00:02:54.720 CC examples/idxd/perf/perf.o 00:02:54.720 LINK spdk_bdev 00:02:54.720 CC examples/vmd/led/led.o 00:02:54.720 CC test/event/scheduler/scheduler.o 00:02:54.720 CC test/event/app_repeat/app_repeat.o 00:02:54.720 CC examples/thread/thread/thread_ex.o 00:02:54.720 LINK spdk_nvme_identify 00:02:54.720 LINK spdk_top 00:02:54.720 LINK mem_callbacks 00:02:54.979 LINK event_perf 00:02:54.979 LINK reactor_perf 00:02:54.979 LINK lsvmd 00:02:54.979 LINK reactor 00:02:54.979 LINK hello_sock 00:02:54.979 LINK vhost 00:02:54.979 LINK led 00:02:54.979 LINK memory_ut 00:02:54.979 LINK app_repeat 00:02:54.979 LINK scheduler 00:02:54.979 LINK idxd_perf 00:02:54.979 LINK thread 00:02:55.237 CC test/nvme/fused_ordering/fused_ordering.o 00:02:55.237 CC test/nvme/startup/startup.o 00:02:55.237 CC test/nvme/aer/aer.o 00:02:55.237 CC test/nvme/boot_partition/boot_partition.o 00:02:55.237 CC test/nvme/reset/reset.o 00:02:55.237 CC test/nvme/reserve/reserve.o 00:02:55.237 CC test/nvme/sgl/sgl.o 00:02:55.237 CC test/nvme/simple_copy/simple_copy.o 00:02:55.238 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:55.238 CC test/nvme/cuse/cuse.o 00:02:55.238 CC test/blobfs/mkfs/mkfs.o 
00:02:55.238 CC test/nvme/overhead/overhead.o 00:02:55.238 CC test/nvme/e2edp/nvme_dp.o 00:02:55.238 CC test/nvme/compliance/nvme_compliance.o 00:02:55.238 CC test/nvme/fdp/fdp.o 00:02:55.238 CC test/nvme/err_injection/err_injection.o 00:02:55.238 CC test/nvme/connect_stress/connect_stress.o 00:02:55.238 CC test/accel/dif/dif.o 00:02:55.497 CC test/lvol/esnap/esnap.o 00:02:55.497 LINK boot_partition 00:02:55.497 LINK startup 00:02:55.497 CC examples/nvme/reconnect/reconnect.o 00:02:55.497 LINK reset 00:02:55.497 CC examples/nvme/abort/abort.o 00:02:55.497 CC examples/nvme/hello_world/hello_world.o 00:02:55.497 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:55.497 CC examples/nvme/arbitration/arbitration.o 00:02:55.497 LINK doorbell_aers 00:02:55.497 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:55.497 LINK fused_ordering 00:02:55.497 CC examples/nvme/hotplug/hotplug.o 00:02:55.497 LINK iscsi_fuzz 00:02:55.497 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:55.497 LINK reserve 00:02:55.497 LINK connect_stress 00:02:55.497 LINK mkfs 00:02:55.497 LINK simple_copy 00:02:55.497 LINK err_injection 00:02:55.497 LINK sgl 00:02:55.497 LINK aer 00:02:55.497 LINK nvme_dp 00:02:55.497 LINK overhead 00:02:55.497 LINK nvme_compliance 00:02:55.497 CC examples/accel/perf/accel_perf.o 00:02:55.497 LINK fdp 00:02:55.497 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:55.497 CC examples/blob/cli/blobcli.o 00:02:55.757 CC examples/blob/hello_world/hello_blob.o 00:02:55.757 LINK cmb_copy 00:02:55.757 LINK hello_world 00:02:55.757 LINK pmr_persistence 00:02:55.757 LINK hotplug 00:02:55.757 LINK reconnect 00:02:55.757 LINK abort 00:02:55.757 LINK arbitration 00:02:55.757 LINK hello_blob 00:02:55.757 LINK dif 00:02:55.757 LINK nvme_manage 00:02:55.757 LINK hello_fsdev 00:02:56.018 LINK accel_perf 00:02:56.018 LINK blobcli 00:02:56.280 LINK cuse 00:02:56.541 CC test/bdev/bdevio/bdevio.o 00:02:56.541 CC examples/bdev/hello_world/hello_bdev.o 00:02:56.541 CC examples/bdev/bdevperf/bdevperf.o 00:02:56.802 LINK hello_bdev 00:02:56.802 LINK bdevio 00:02:57.063 LINK bdevperf 00:02:57.636 CC examples/nvmf/nvmf/nvmf.o 00:02:58.209 LINK nvmf 00:02:59.594 LINK esnap 00:02:59.855 00:02:59.855 real 0m56.833s 00:02:59.855 user 7m38.471s 00:02:59.855 sys 4m18.072s 00:02:59.856 16:27:51 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:59.856 16:27:51 make -- common/autotest_common.sh@10 -- $ set +x 00:02:59.856 ************************************ 00:02:59.856 END TEST make 00:02:59.856 ************************************ 00:02:59.856 16:27:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:59.856 16:27:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:59.856 16:27:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:59.856 16:27:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.856 16:27:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:59.856 16:27:51 -- pm/common@44 -- $ pid=2390766 00:02:59.856 16:27:51 -- pm/common@50 -- $ kill -TERM 2390766 00:02:59.856 16:27:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.856 16:27:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:59.856 16:27:51 -- pm/common@44 -- $ pid=2390767 00:02:59.856 16:27:51 -- pm/common@50 -- $ kill -TERM 2390767 00:02:59.856 16:27:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.856 
16:27:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:59.856 16:27:51 -- pm/common@44 -- $ pid=2390769 00:02:59.856 16:27:51 -- pm/common@50 -- $ kill -TERM 2390769 00:02:59.856 16:27:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.856 16:27:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:59.856 16:27:51 -- pm/common@44 -- $ pid=2390793 00:02:59.856 16:27:51 -- pm/common@50 -- $ sudo -E kill -TERM 2390793 00:02:59.856 16:27:51 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:59.856 16:27:51 -- common/autotest_common.sh@1681 -- # lcov --version 00:02:59.856 16:27:51 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:00.117 16:27:51 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:00.117 16:27:51 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:00.117 16:27:51 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:00.117 16:27:51 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:00.117 16:27:51 -- scripts/common.sh@336 -- # IFS=.-: 00:03:00.117 16:27:51 -- scripts/common.sh@336 -- # read -ra ver1 00:03:00.117 16:27:51 -- scripts/common.sh@337 -- # IFS=.-: 00:03:00.117 16:27:51 -- scripts/common.sh@337 -- # read -ra ver2 00:03:00.117 16:27:51 -- scripts/common.sh@338 -- # local 'op=<' 00:03:00.117 16:27:51 -- scripts/common.sh@340 -- # ver1_l=2 00:03:00.117 16:27:51 -- scripts/common.sh@341 -- # ver2_l=1 00:03:00.117 16:27:51 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:00.117 16:27:51 -- scripts/common.sh@344 -- # case "$op" in 00:03:00.117 16:27:51 -- scripts/common.sh@345 -- # : 1 00:03:00.117 16:27:51 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:00.117 16:27:51 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:00.117 16:27:51 -- scripts/common.sh@365 -- # decimal 1 00:03:00.117 16:27:51 -- scripts/common.sh@353 -- # local d=1 00:03:00.117 16:27:51 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:00.117 16:27:51 -- scripts/common.sh@355 -- # echo 1 00:03:00.117 16:27:51 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:00.117 16:27:51 -- scripts/common.sh@366 -- # decimal 2 00:03:00.117 16:27:51 -- scripts/common.sh@353 -- # local d=2 00:03:00.117 16:27:51 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:00.117 16:27:51 -- scripts/common.sh@355 -- # echo 2 00:03:00.117 16:27:51 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:00.117 16:27:51 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:00.117 16:27:51 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:00.117 16:27:51 -- scripts/common.sh@368 -- # return 0 00:03:00.117 16:27:51 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:00.117 16:27:51 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:00.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.117 --rc genhtml_branch_coverage=1 00:03:00.117 --rc genhtml_function_coverage=1 00:03:00.117 --rc genhtml_legend=1 00:03:00.117 --rc geninfo_all_blocks=1 00:03:00.117 --rc geninfo_unexecuted_blocks=1 00:03:00.117 00:03:00.117 ' 00:03:00.117 16:27:51 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:00.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.117 --rc genhtml_branch_coverage=1 00:03:00.117 --rc genhtml_function_coverage=1 00:03:00.117 --rc genhtml_legend=1 00:03:00.117 --rc geninfo_all_blocks=1 00:03:00.117 --rc geninfo_unexecuted_blocks=1 00:03:00.117 00:03:00.117 ' 00:03:00.117 16:27:51 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:00.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.117 --rc genhtml_branch_coverage=1 00:03:00.117 --rc genhtml_function_coverage=1 00:03:00.117 --rc genhtml_legend=1 00:03:00.117 --rc geninfo_all_blocks=1 00:03:00.117 --rc geninfo_unexecuted_blocks=1 00:03:00.117 00:03:00.117 ' 00:03:00.117 16:27:51 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:00.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.117 --rc genhtml_branch_coverage=1 00:03:00.117 --rc genhtml_function_coverage=1 00:03:00.117 --rc genhtml_legend=1 00:03:00.117 --rc geninfo_all_blocks=1 00:03:00.117 --rc geninfo_unexecuted_blocks=1 00:03:00.117 00:03:00.117 ' 00:03:00.117 16:27:51 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:00.117 16:27:51 -- nvmf/common.sh@7 -- # uname -s 00:03:00.117 16:27:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:00.117 16:27:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:00.117 16:27:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:00.117 16:27:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:00.117 16:27:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:00.117 16:27:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:00.117 16:27:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:00.117 16:27:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:00.117 16:27:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:00.117 16:27:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:00.117 16:27:51 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:03:00.117 16:27:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:03:00.117 16:27:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:00.117 16:27:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:00.117 16:27:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:00.117 16:27:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:00.117 16:27:51 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:00.117 16:27:51 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:00.117 16:27:51 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:00.117 16:27:51 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:00.117 16:27:51 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:00.117 16:27:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.118 16:27:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.118 16:27:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.118 16:27:51 -- paths/export.sh@5 -- # export PATH 00:03:00.118 16:27:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.118 16:27:51 -- nvmf/common.sh@51 -- # : 0 00:03:00.118 16:27:51 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:00.118 16:27:51 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:00.118 16:27:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:00.118 16:27:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:00.118 16:27:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:00.118 16:27:51 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:00.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:00.118 16:27:51 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:00.118 16:27:51 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:00.118 16:27:51 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:00.118 16:27:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:00.118 16:27:51 -- spdk/autotest.sh@32 -- # uname -s 00:03:00.118 16:27:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:00.118 16:27:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:00.118 16:27:51 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
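
[Annotation] The autotest.sh trace at this point is swapping the kernel's core-dump handler: the saved old_core_pattern shows cores normally go to systemd-coredump, and the echo traced in the records just below points them at SPDK's core-collector script instead. A minimal sketch of that swap, assuming root, with $rootdir standing for the workspace checkout as in the traced scripts:

    # save the current handler so it can be restored after the run
    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)
    mkdir -p "$rootdir/../output/coredumps"
    # a leading '|' makes the kernel pipe each crashing process's core image
    # into the named program; %P expands to the pid, %s the signal, %t the time
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
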
00:03:00.118 16:27:51 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:00.118 16:27:51 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:00.118 16:27:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:00.118 16:27:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:00.118 16:27:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:00.118 16:27:51 -- spdk/autotest.sh@48 -- # udevadm_pid=2454370 00:03:00.118 16:27:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:00.118 16:27:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:00.118 16:27:51 -- pm/common@17 -- # local monitor 00:03:00.118 16:27:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.118 16:27:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.118 16:27:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.118 16:27:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.118 16:27:51 -- pm/common@21 -- # date +%s 00:03:00.118 16:27:51 -- pm/common@25 -- # sleep 1 00:03:00.118 16:27:51 -- pm/common@21 -- # date +%s 00:03:00.118 16:27:51 -- pm/common@21 -- # date +%s 00:03:00.118 16:27:51 -- pm/common@21 -- # date +%s 00:03:00.118 16:27:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727792871 00:03:00.118 16:27:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727792871 00:03:00.118 16:27:51 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727792871 00:03:00.118 16:27:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727792871 00:03:00.118 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727792871_collect-cpu-load.pm.log 00:03:00.118 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727792871_collect-vmstat.pm.log 00:03:00.118 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727792871_collect-cpu-temp.pm.log 00:03:00.118 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727792871_collect-bmc-pm.bmc.pm.log 00:03:01.059 16:27:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:01.059 16:27:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:01.059 16:27:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:01.059 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:03:01.059 16:27:52 -- spdk/autotest.sh@59 -- # create_test_list 00:03:01.059 16:27:52 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:01.059 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:03:01.059 16:27:52 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:01.059 16:27:52 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:01.059 16:27:52 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:01.059 16:27:52 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:01.059 16:27:52 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:01.059 16:27:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:01.059 16:27:52 -- common/autotest_common.sh@1455 -- # uname 00:03:01.059 16:27:52 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:01.059 16:27:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:01.059 16:27:52 -- common/autotest_common.sh@1475 -- # uname 00:03:01.059 16:27:52 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:01.059 16:27:52 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:01.059 16:27:52 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:01.319 lcov: LCOV version 1.15 00:03:01.319 16:27:52 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:23.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:23.308 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:29.888 16:28:21 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:29.888 16:28:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:29.888 16:28:21 -- common/autotest_common.sh@10 -- # set +x 00:03:29.888 16:28:21 -- spdk/autotest.sh@78 -- # rm -f 00:03:29.888 16:28:21 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.188 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:33.188 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:33.188 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:33.188 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:33.188 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:33.188 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:33.188 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:33.188 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:33.449 0000:65:00.0 (8086 0a54): Already using the nvme driver 00:03:33.449 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:33.449 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:33.449 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:33.449 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:33.449 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:33.449 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:33.449 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:33.449 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:33.710 16:28:25 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:33.710 16:28:25 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:33.710 16:28:25 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:33.710 16:28:25 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:33.710 16:28:25 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:33.710 16:28:25 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:33.710 16:28:25 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:33.710 16:28:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:33.710 16:28:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:33.710 16:28:25 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:33.710 16:28:25 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:33.710 16:28:25 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:33.710 16:28:25 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:33.710 16:28:25 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:33.710 16:28:25 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:33.971 No valid GPT data, bailing 00:03:33.971 16:28:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:33.971 16:28:25 -- scripts/common.sh@394 -- # pt= 00:03:33.971 16:28:25 -- scripts/common.sh@395 -- # return 1 00:03:33.971 16:28:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:33.971 1+0 records in 00:03:33.971 1+0 records out 00:03:33.971 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00150525 s, 697 MB/s 00:03:33.971 16:28:25 -- spdk/autotest.sh@105 -- # sync 00:03:33.971 16:28:25 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:33.971 16:28:25 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:33.971 16:28:25 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:42.109 16:28:32 -- spdk/autotest.sh@111 -- # uname -s 00:03:42.109 16:28:32 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:42.109 16:28:32 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:42.109 16:28:32 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:45.405 Hugepages 00:03:45.405 node hugesize free / total 00:03:45.405 node0 1048576kB 0 / 0 00:03:45.405 node0 2048kB 0 / 0 00:03:45.405 node1 1048576kB 0 / 0 00:03:45.405 node1 2048kB 0 / 0 00:03:45.405 00:03:45.405 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:45.405 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:45.405 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:45.405 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:45.405 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:45.405 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:45.405 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:45.405 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:45.405 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:45.405 NVMe 0000:65:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:45.406 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:45.406 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:45.406 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:45.406 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:45.406 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:45.406 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:45.406 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:45.406 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:03:45.406 16:28:36 -- spdk/autotest.sh@117 -- # uname -s 00:03:45.406 16:28:36 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:45.406 16:28:36 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:45.406 16:28:36 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:48.707 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:48.707 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:48.707 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:48.707 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:48.707 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:48.707 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:48.707 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:48.707 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:48.707 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:48.707 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:48.707 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:48.707 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:48.707 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:48.707 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:48.707 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:48.707 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:50.624 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:03:50.624 16:28:42 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:51.647 16:28:43 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:51.647 16:28:43 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:51.647 16:28:43 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:51.647 16:28:43 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:51.647 16:28:43 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:51.647 16:28:43 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:51.647 16:28:43 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:51.647 16:28:43 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:51.647 16:28:43 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:51.647 16:28:43 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:51.647 16:28:43 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:51.647 16:28:43 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:54.941 Waiting for block devices as requested 00:03:54.941 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:55.202 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:55.202 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:55.202 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:55.202 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:55.461 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:55.461 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:55.461 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:55.720 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:03:55.720 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:55.980 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:55.980 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:55.980 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:56.241 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:56.241 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:56.241 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:56.501 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:03:56.762 16:28:48 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:56.762 16:28:48 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:56.762 16:28:48 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:56.762 16:28:48 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:56.762 16:28:48 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:56.762 16:28:48 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:56.762 16:28:48 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:56.762 16:28:48 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:56.762 16:28:48 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:56.762 16:28:48 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:56.762 16:28:48 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:56.762 16:28:48 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:56.762 16:28:48 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:56.762 16:28:48 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:03:56.762 16:28:48 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:56.762 16:28:48 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:56.762 16:28:48 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:56.762 16:28:48 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:56.762 16:28:48 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:56.762 16:28:48 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:56.762 16:28:48 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:56.762 16:28:48 -- common/autotest_common.sh@1541 -- # continue 00:03:56.762 16:28:48 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:56.762 16:28:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:56.762 16:28:48 -- common/autotest_common.sh@10 -- # set +x 00:03:56.762 16:28:48 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:56.762 16:28:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:56.762 16:28:48 -- common/autotest_common.sh@10 -- # set +x 00:03:56.762 16:28:48 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:00.063 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:00.063 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:00.063 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:00.063 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:00.063 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:00.063 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:00.063 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:00.063 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:00.323 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:00.323 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:00.323 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:00.323 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:00.323 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:00.323 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:00.323 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:00.323 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:02.234 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:04:02.495 16:28:53 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:02.495 16:28:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:02.495 16:28:53 -- common/autotest_common.sh@10 -- # set +x 00:04:02.496 16:28:53 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:02.496 16:28:53 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:02.496 16:28:53 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:02.496 16:28:53 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:02.496 16:28:53 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:02.496 16:28:53 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:02.496 16:28:53 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:02.496 16:28:53 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:02.496 16:28:54 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:02.496 16:28:54 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:02.496 16:28:54 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:02.496 16:28:54 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:02.496 16:28:54 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:02.496 16:28:54 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:02.496 16:28:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:04:02.496 16:28:54 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:02.496 16:28:54 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:02.496 16:28:54 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:02.496 16:28:54 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:02.496 16:28:54 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:02.496 16:28:54 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:02.496 16:28:54 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:65:00.0 00:04:02.496 16:28:54 -- common/autotest_common.sh@1577 -- # [[ -z 0000:65:00.0 ]] 00:04:02.496 16:28:54 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2471318 00:04:02.496 16:28:54 -- common/autotest_common.sh@1583 -- # waitforlisten 2471318 00:04:02.496 16:28:54 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.496 16:28:54 -- common/autotest_common.sh@831 -- # '[' -z 2471318 ']' 00:04:02.496 16:28:54 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.496 16:28:54 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:02.496 16:28:54 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.496 16:28:54 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:02.496 16:28:54 -- common/autotest_common.sh@10 -- # set +x 00:04:02.496 [2024-10-01 16:28:54.172465] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
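
[Annotation] The opal_revert_cleanup trace just above selects its target controllers by PCI device id: gen_nvme.sh enumerates every NVMe BDF SPDK can see, and sysfs is consulted to keep only the 8086:0a54 parts (the NVMe device listed in the setup.sh status table earlier). A minimal sketch of that filter, using only the commands and paths shown in the trace:

    # enumerate NVMe BDFs known to SPDK, then keep only 0x0a54 devices
    bdfs=()
    for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
        # sysfs exposes each PCI function's device id, e.g. 0x0a54
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && bdfs+=("$bdf")
    done
    printf '%s\n' "${bdfs[@]}"   # on this rig: 0000:65:00.0
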
00:04:02.496 [2024-10-01 16:28:54.172530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471318 ] 00:04:02.756 [2024-10-01 16:28:54.254035] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.756 [2024-10-01 16:28:54.339589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.829 16:28:55 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:03.829 16:28:55 -- common/autotest_common.sh@864 -- # return 0 00:04:03.829 16:28:55 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:03.829 16:28:55 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:03.829 16:28:55 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:65:00.0 00:04:07.125 nvme0n1 00:04:07.125 16:28:58 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:07.125 [2024-10-01 16:28:58.319287] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:07.125 request: 00:04:07.125 { 00:04:07.125 "nvme_ctrlr_name": "nvme0", 00:04:07.125 "password": "test", 00:04:07.125 "method": "bdev_nvme_opal_revert", 00:04:07.126 "req_id": 1 00:04:07.126 } 00:04:07.126 Got JSON-RPC error response 00:04:07.126 response: 00:04:07.126 { 00:04:07.126 "code": -32602, 00:04:07.126 "message": "Invalid parameters" 00:04:07.126 } 00:04:07.126 16:28:58 -- common/autotest_common.sh@1589 -- # true 00:04:07.126 16:28:58 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:07.126 16:28:58 -- common/autotest_common.sh@1593 -- # killprocess 2471318 00:04:07.126 16:28:58 -- common/autotest_common.sh@950 -- # '[' -z 2471318 ']' 00:04:07.126 16:28:58 -- common/autotest_common.sh@954 -- # kill -0 2471318 00:04:07.126 16:28:58 -- common/autotest_common.sh@955 -- # uname 00:04:07.126 16:28:58 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:07.126 16:28:58 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2471318 00:04:07.126 16:28:58 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:07.126 16:28:58 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:07.126 16:28:58 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2471318' 00:04:07.126 killing process with pid 2471318 00:04:07.126 16:28:58 -- common/autotest_common.sh@969 -- # kill 2471318 00:04:07.126 16:28:58 -- common/autotest_common.sh@974 -- # wait 2471318 00:04:09.663 16:29:00 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:09.663 16:29:00 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:09.663 16:29:00 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:09.663 16:29:00 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:09.663 16:29:00 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:09.663 16:29:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:09.663 16:29:00 -- common/autotest_common.sh@10 -- # set +x 00:04:09.663 16:29:00 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:09.663 16:29:00 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:09.663 16:29:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.663 16:29:00 -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:04:09.663 16:29:00 -- common/autotest_common.sh@10 -- # set +x 00:04:09.663 ************************************ 00:04:09.663 START TEST env 00:04:09.663 ************************************ 00:04:09.663 16:29:00 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:09.663 * Looking for test storage... 00:04:09.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:09.663 16:29:00 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:09.663 16:29:00 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:09.663 16:29:00 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:09.663 16:29:01 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:09.663 16:29:01 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.663 16:29:01 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.663 16:29:01 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.663 16:29:01 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.663 16:29:01 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.663 16:29:01 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.663 16:29:01 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.663 16:29:01 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.663 16:29:01 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.663 16:29:01 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.663 16:29:01 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.663 16:29:01 env -- scripts/common.sh@344 -- # case "$op" in 00:04:09.663 16:29:01 env -- scripts/common.sh@345 -- # : 1 00:04:09.663 16:29:01 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.663 16:29:01 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:09.663 16:29:01 env -- scripts/common.sh@365 -- # decimal 1 00:04:09.663 16:29:01 env -- scripts/common.sh@353 -- # local d=1 00:04:09.663 16:29:01 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.663 16:29:01 env -- scripts/common.sh@355 -- # echo 1 00:04:09.663 16:29:01 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.663 16:29:01 env -- scripts/common.sh@366 -- # decimal 2 00:04:09.663 16:29:01 env -- scripts/common.sh@353 -- # local d=2 00:04:09.663 16:29:01 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.663 16:29:01 env -- scripts/common.sh@355 -- # echo 2 00:04:09.663 16:29:01 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.663 16:29:01 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.663 16:29:01 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.664 16:29:01 env -- scripts/common.sh@368 -- # return 0 00:04:09.664 16:29:01 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.664 16:29:01 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:09.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.664 --rc genhtml_branch_coverage=1 00:04:09.664 --rc genhtml_function_coverage=1 00:04:09.664 --rc genhtml_legend=1 00:04:09.664 --rc geninfo_all_blocks=1 00:04:09.664 --rc geninfo_unexecuted_blocks=1 00:04:09.664 00:04:09.664 ' 00:04:09.664 16:29:01 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:09.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.664 --rc genhtml_branch_coverage=1 00:04:09.664 --rc genhtml_function_coverage=1 00:04:09.664 --rc genhtml_legend=1 00:04:09.664 --rc geninfo_all_blocks=1 00:04:09.664 --rc geninfo_unexecuted_blocks=1 00:04:09.664 00:04:09.664 ' 00:04:09.664 16:29:01 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:09.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.664 --rc genhtml_branch_coverage=1 00:04:09.664 --rc genhtml_function_coverage=1 00:04:09.664 --rc genhtml_legend=1 00:04:09.664 --rc geninfo_all_blocks=1 00:04:09.664 --rc geninfo_unexecuted_blocks=1 00:04:09.664 00:04:09.664 ' 00:04:09.664 16:29:01 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:09.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.664 --rc genhtml_branch_coverage=1 00:04:09.664 --rc genhtml_function_coverage=1 00:04:09.664 --rc genhtml_legend=1 00:04:09.664 --rc geninfo_all_blocks=1 00:04:09.664 --rc geninfo_unexecuted_blocks=1 00:04:09.664 00:04:09.664 ' 00:04:09.664 16:29:01 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:09.664 16:29:01 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.664 16:29:01 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.664 16:29:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.664 ************************************ 00:04:09.664 START TEST env_memory 00:04:09.664 ************************************ 00:04:09.664 16:29:01 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:09.664 00:04:09.664 00:04:09.664 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.664 http://cunit.sourceforge.net/ 00:04:09.664 00:04:09.664 00:04:09.664 Suite: memory 00:04:09.664 Test: alloc and free memory map ...[2024-10-01 16:29:01.116119] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:09.664 passed 00:04:09.664 Test: mem map translation ...[2024-10-01 16:29:01.139630] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:09.664 [2024-10-01 16:29:01.139653] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:09.664 [2024-10-01 16:29:01.139697] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:09.664 [2024-10-01 16:29:01.139704] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:09.664 passed 00:04:09.664 Test: mem map registration ...[2024-10-01 16:29:01.190729] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:09.664 [2024-10-01 16:29:01.190751] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:09.664 passed 00:04:09.664 Test: mem map adjacent registrations ...passed 00:04:09.664 00:04:09.664 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.664 suites 1 1 n/a 0 0 00:04:09.664 tests 4 4 4 0 0 00:04:09.664 asserts 152 152 152 0 n/a 00:04:09.664 00:04:09.664 Elapsed time = 0.181 seconds 00:04:09.664 00:04:09.664 real 0m0.196s 00:04:09.664 user 0m0.184s 00:04:09.664 sys 0m0.011s 00:04:09.664 16:29:01 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.664 16:29:01 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:09.664 ************************************ 00:04:09.664 END TEST env_memory 00:04:09.664 ************************************ 00:04:09.664 16:29:01 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:09.664 16:29:01 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.664 16:29:01 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.664 16:29:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.664 ************************************ 00:04:09.664 START TEST env_vtophys 00:04:09.664 ************************************ 00:04:09.925 16:29:01 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:09.925 EAL: lib.eal log level changed from notice to debug 00:04:09.925 EAL: Detected lcore 0 as core 0 on socket 0 00:04:09.925 EAL: Detected lcore 1 as core 1 on socket 0 00:04:09.925 EAL: Detected lcore 2 as core 2 on socket 0 00:04:09.925 EAL: Detected lcore 3 as core 3 on socket 0 00:04:09.925 EAL: Detected lcore 4 as core 4 on socket 0 00:04:09.925 EAL: Detected lcore 5 as core 5 on socket 0 00:04:09.925 EAL: Detected lcore 6 as core 6 on socket 0 00:04:09.925 EAL: Detected lcore 7 as core 7 on socket 0 00:04:09.925 EAL: Detected lcore 8 as core 8 on socket 0 00:04:09.925 EAL: Detected lcore 9 as core 9 on socket 0 00:04:09.925 EAL: Detected lcore 10 as 
core 10 on socket 0 00:04:09.925 EAL: Detected lcore 11 as core 11 on socket 0 00:04:09.925 EAL: Detected lcore 12 as core 12 on socket 0 00:04:09.925 EAL: Detected lcore 13 as core 13 on socket 0 00:04:09.925 EAL: Detected lcore 14 as core 14 on socket 0 00:04:09.925 EAL: Detected lcore 15 as core 15 on socket 0 00:04:09.925 EAL: Detected lcore 16 as core 16 on socket 0 00:04:09.925 EAL: Detected lcore 17 as core 17 on socket 0 00:04:09.925 EAL: Detected lcore 18 as core 18 on socket 0 00:04:09.925 EAL: Detected lcore 19 as core 19 on socket 0 00:04:09.925 EAL: Detected lcore 20 as core 20 on socket 0 00:04:09.925 EAL: Detected lcore 21 as core 21 on socket 0 00:04:09.925 EAL: Detected lcore 22 as core 22 on socket 0 00:04:09.925 EAL: Detected lcore 23 as core 23 on socket 0 00:04:09.925 EAL: Detected lcore 24 as core 24 on socket 0 00:04:09.925 EAL: Detected lcore 25 as core 25 on socket 0 00:04:09.925 EAL: Detected lcore 26 as core 26 on socket 0 00:04:09.925 EAL: Detected lcore 27 as core 27 on socket 0 00:04:09.925 EAL: Detected lcore 28 as core 28 on socket 0 00:04:09.925 EAL: Detected lcore 29 as core 29 on socket 0 00:04:09.925 EAL: Detected lcore 30 as core 30 on socket 0 00:04:09.925 EAL: Detected lcore 31 as core 31 on socket 0 00:04:09.925 EAL: Detected lcore 32 as core 0 on socket 1 00:04:09.925 EAL: Detected lcore 33 as core 1 on socket 1 00:04:09.925 EAL: Detected lcore 34 as core 2 on socket 1 00:04:09.925 EAL: Detected lcore 35 as core 3 on socket 1 00:04:09.925 EAL: Detected lcore 36 as core 4 on socket 1 00:04:09.925 EAL: Detected lcore 37 as core 5 on socket 1 00:04:09.925 EAL: Detected lcore 38 as core 6 on socket 1 00:04:09.925 EAL: Detected lcore 39 as core 7 on socket 1 00:04:09.925 EAL: Detected lcore 40 as core 8 on socket 1 00:04:09.925 EAL: Detected lcore 41 as core 9 on socket 1 00:04:09.925 EAL: Detected lcore 42 as core 10 on socket 1 00:04:09.925 EAL: Detected lcore 43 as core 11 on socket 1 00:04:09.925 EAL: Detected lcore 44 as core 12 on socket 1 00:04:09.925 EAL: Detected lcore 45 as core 13 on socket 1 00:04:09.925 EAL: Detected lcore 46 as core 14 on socket 1 00:04:09.925 EAL: Detected lcore 47 as core 15 on socket 1 00:04:09.925 EAL: Detected lcore 48 as core 16 on socket 1 00:04:09.925 EAL: Detected lcore 49 as core 17 on socket 1 00:04:09.925 EAL: Detected lcore 50 as core 18 on socket 1 00:04:09.925 EAL: Detected lcore 51 as core 19 on socket 1 00:04:09.925 EAL: Detected lcore 52 as core 20 on socket 1 00:04:09.925 EAL: Detected lcore 53 as core 21 on socket 1 00:04:09.925 EAL: Detected lcore 54 as core 22 on socket 1 00:04:09.925 EAL: Detected lcore 55 as core 23 on socket 1 00:04:09.925 EAL: Detected lcore 56 as core 24 on socket 1 00:04:09.925 EAL: Detected lcore 57 as core 25 on socket 1 00:04:09.925 EAL: Detected lcore 58 as core 26 on socket 1 00:04:09.925 EAL: Detected lcore 59 as core 27 on socket 1 00:04:09.925 EAL: Detected lcore 60 as core 28 on socket 1 00:04:09.925 EAL: Detected lcore 61 as core 29 on socket 1 00:04:09.925 EAL: Detected lcore 62 as core 30 on socket 1 00:04:09.925 EAL: Detected lcore 63 as core 31 on socket 1 00:04:09.925 EAL: Detected lcore 64 as core 0 on socket 0 00:04:09.925 EAL: Detected lcore 65 as core 1 on socket 0 00:04:09.925 EAL: Detected lcore 66 as core 2 on socket 0 00:04:09.925 EAL: Detected lcore 67 as core 3 on socket 0 00:04:09.925 EAL: Detected lcore 68 as core 4 on socket 0 00:04:09.925 EAL: Detected lcore 69 as core 5 on socket 0 00:04:09.925 EAL: Detected lcore 70 as core 6 on socket 0 
00:04:09.925 EAL: Detected lcore 71 as core 7 on socket 0 00:04:09.925 EAL: Detected lcore 72 as core 8 on socket 0 00:04:09.925 EAL: Detected lcore 73 as core 9 on socket 0 00:04:09.925 EAL: Detected lcore 74 as core 10 on socket 0 00:04:09.925 EAL: Detected lcore 75 as core 11 on socket 0 00:04:09.925 EAL: Detected lcore 76 as core 12 on socket 0 00:04:09.925 EAL: Detected lcore 77 as core 13 on socket 0 00:04:09.925 EAL: Detected lcore 78 as core 14 on socket 0 00:04:09.925 EAL: Detected lcore 79 as core 15 on socket 0 00:04:09.925 EAL: Detected lcore 80 as core 16 on socket 0 00:04:09.925 EAL: Detected lcore 81 as core 17 on socket 0 00:04:09.925 EAL: Detected lcore 82 as core 18 on socket 0 00:04:09.925 EAL: Detected lcore 83 as core 19 on socket 0 00:04:09.925 EAL: Detected lcore 84 as core 20 on socket 0 00:04:09.925 EAL: Detected lcore 85 as core 21 on socket 0 00:04:09.925 EAL: Detected lcore 86 as core 22 on socket 0 00:04:09.925 EAL: Detected lcore 87 as core 23 on socket 0 00:04:09.925 EAL: Detected lcore 88 as core 24 on socket 0 00:04:09.925 EAL: Detected lcore 89 as core 25 on socket 0 00:04:09.925 EAL: Detected lcore 90 as core 26 on socket 0 00:04:09.925 EAL: Detected lcore 91 as core 27 on socket 0 00:04:09.925 EAL: Detected lcore 92 as core 28 on socket 0 00:04:09.925 EAL: Detected lcore 93 as core 29 on socket 0 00:04:09.925 EAL: Detected lcore 94 as core 30 on socket 0 00:04:09.925 EAL: Detected lcore 95 as core 31 on socket 0 00:04:09.925 EAL: Detected lcore 96 as core 0 on socket 1 00:04:09.925 EAL: Detected lcore 97 as core 1 on socket 1 00:04:09.925 EAL: Detected lcore 98 as core 2 on socket 1 00:04:09.925 EAL: Detected lcore 99 as core 3 on socket 1 00:04:09.925 EAL: Detected lcore 100 as core 4 on socket 1 00:04:09.925 EAL: Detected lcore 101 as core 5 on socket 1 00:04:09.925 EAL: Detected lcore 102 as core 6 on socket 1 00:04:09.925 EAL: Detected lcore 103 as core 7 on socket 1 00:04:09.925 EAL: Detected lcore 104 as core 8 on socket 1 00:04:09.925 EAL: Detected lcore 105 as core 9 on socket 1 00:04:09.925 EAL: Detected lcore 106 as core 10 on socket 1 00:04:09.925 EAL: Detected lcore 107 as core 11 on socket 1 00:04:09.925 EAL: Detected lcore 108 as core 12 on socket 1 00:04:09.925 EAL: Detected lcore 109 as core 13 on socket 1 00:04:09.925 EAL: Detected lcore 110 as core 14 on socket 1 00:04:09.925 EAL: Detected lcore 111 as core 15 on socket 1 00:04:09.925 EAL: Detected lcore 112 as core 16 on socket 1 00:04:09.925 EAL: Detected lcore 113 as core 17 on socket 1 00:04:09.925 EAL: Detected lcore 114 as core 18 on socket 1 00:04:09.925 EAL: Detected lcore 115 as core 19 on socket 1 00:04:09.925 EAL: Detected lcore 116 as core 20 on socket 1 00:04:09.925 EAL: Detected lcore 117 as core 21 on socket 1 00:04:09.925 EAL: Detected lcore 118 as core 22 on socket 1 00:04:09.925 EAL: Detected lcore 119 as core 23 on socket 1 00:04:09.925 EAL: Detected lcore 120 as core 24 on socket 1 00:04:09.925 EAL: Detected lcore 121 as core 25 on socket 1 00:04:09.926 EAL: Detected lcore 122 as core 26 on socket 1 00:04:09.926 EAL: Detected lcore 123 as core 27 on socket 1 00:04:09.926 EAL: Detected lcore 124 as core 28 on socket 1 00:04:09.926 EAL: Detected lcore 125 as core 29 on socket 1 00:04:09.926 EAL: Detected lcore 126 as core 30 on socket 1 00:04:09.926 EAL: Detected lcore 127 as core 31 on socket 1 00:04:09.926 EAL: Maximum logical cores by configuration: 128 00:04:09.926 EAL: Detected CPU lcores: 128 00:04:09.926 EAL: Detected NUMA nodes: 2 00:04:09.926 EAL: Checking 
presence of .so 'librte_eal.so.24.1' 00:04:09.926 EAL: Detected shared linkage of DPDK 00:04:09.926 EAL: No shared files mode enabled, IPC will be disabled 00:04:09.926 EAL: Bus pci wants IOVA as 'DC' 00:04:09.926 EAL: Buses did not request a specific IOVA mode. 00:04:09.926 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:09.926 EAL: Selected IOVA mode 'VA' 00:04:09.926 EAL: Probing VFIO support... 00:04:09.926 EAL: IOMMU type 1 (Type 1) is supported 00:04:09.926 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:09.926 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:09.926 EAL: VFIO support initialized 00:04:09.926 EAL: Ask a virtual area of 0x2e000 bytes 00:04:09.926 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:09.926 EAL: Setting up physically contiguous memory... 00:04:09.926 EAL: Setting maximum number of open files to 524288 00:04:09.926 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:09.926 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:09.926 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:09.926 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.926 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:09.926 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.926 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.926 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:09.926 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:09.926 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.926 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:09.926 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.926 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.926 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:09.926 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:09.926 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.926 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:09.926 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.926 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.926 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:09.926 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:09.926 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.926 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:09.926 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.926 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.926 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:09.926 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:09.926 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:09.926 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.926 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:09.926 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:09.926 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.926 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:09.926 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:09.926 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.926 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:09.926 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:09.926 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.926 EAL: Virtual area found at 
0x201400c00000 (size = 0x400000000) 00:04:09.926 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:09.926 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.926 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:09.926 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:09.926 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.926 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:09.926 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:09.926 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.926 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:09.926 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:09.926 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.926 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:09.926 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:09.926 EAL: Hugepages will be freed exactly as allocated. 00:04:09.926 EAL: No shared files mode enabled, IPC is disabled 00:04:09.926 EAL: No shared files mode enabled, IPC is disabled 00:04:09.926 EAL: TSC frequency is ~2600000 KHz 00:04:09.926 EAL: Main lcore 0 is ready (tid=7fc7d2c8ca00;cpuset=[0]) 00:04:09.926 EAL: Trying to obtain current memory policy. 00:04:09.926 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.926 EAL: Restoring previous memory policy: 0 00:04:09.926 EAL: request: mp_malloc_sync 00:04:09.926 EAL: No shared files mode enabled, IPC is disabled 00:04:09.926 EAL: Heap on socket 0 was expanded by 2MB 00:04:09.926 EAL: No shared files mode enabled, IPC is disabled 00:04:09.926 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:09.926 EAL: Mem event callback 'spdk:(nil)' registered 00:04:09.926 00:04:09.926 00:04:09.926 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.926 http://cunit.sourceforge.net/ 00:04:09.926 00:04:09.926 00:04:09.926 Suite: components_suite 00:04:09.926 Test: vtophys_malloc_test ...passed 00:04:09.926 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:09.926 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.926 EAL: Restoring previous memory policy: 4 00:04:09.926 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.926 EAL: request: mp_malloc_sync 00:04:09.926 EAL: No shared files mode enabled, IPC is disabled 00:04:09.926 EAL: Heap on socket 0 was expanded by 4MB 00:04:09.926 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.926 EAL: request: mp_malloc_sync 00:04:09.926 EAL: No shared files mode enabled, IPC is disabled 00:04:09.926 EAL: Heap on socket 0 was shrunk by 4MB 00:04:09.926 EAL: Trying to obtain current memory policy. 00:04:09.926 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.926 EAL: Restoring previous memory policy: 4 00:04:09.926 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.926 EAL: request: mp_malloc_sync 00:04:09.926 EAL: No shared files mode enabled, IPC is disabled 00:04:09.926 EAL: Heap on socket 0 was expanded by 6MB 00:04:09.926 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.926 EAL: request: mp_malloc_sync 00:04:09.926 EAL: No shared files mode enabled, IPC is disabled 00:04:09.926 EAL: Heap on socket 0 was shrunk by 6MB 00:04:09.926 EAL: Trying to obtain current memory policy. 
00:04:09.926 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.926 EAL: Restoring previous memory policy: 4 00:04:09.926 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.926 EAL: request: mp_malloc_sync 00:04:09.926 EAL: No shared files mode enabled, IPC is disabled 00:04:09.926 EAL: Heap on socket 0 was expanded by 10MB 00:04:09.926 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.926 EAL: request: mp_malloc_sync 00:04:09.926 EAL: No shared files mode enabled, IPC is disabled 00:04:09.926 EAL: Heap on socket 0 was shrunk by 10MB 00:04:09.926 EAL: Trying to obtain current memory policy. 00:04:09.926 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.926 EAL: Restoring previous memory policy: 4 00:04:09.926 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.926 EAL: request: mp_malloc_sync 00:04:09.926 EAL: No shared files mode enabled, IPC is disabled 00:04:09.926 EAL: Heap on socket 0 was expanded by 18MB 00:04:09.926 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.926 EAL: request: mp_malloc_sync 00:04:09.926 EAL: No shared files mode enabled, IPC is disabled 00:04:09.926 EAL: Heap on socket 0 was shrunk by 18MB 00:04:09.926 EAL: Trying to obtain current memory policy. 00:04:09.926 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.926 EAL: Restoring previous memory policy: 4 00:04:09.926 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.926 EAL: request: mp_malloc_sync 00:04:09.926 EAL: No shared files mode enabled, IPC is disabled 00:04:09.926 EAL: Heap on socket 0 was expanded by 34MB 00:04:09.926 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.926 EAL: request: mp_malloc_sync 00:04:09.926 EAL: No shared files mode enabled, IPC is disabled 00:04:09.926 EAL: Heap on socket 0 was shrunk by 34MB 00:04:09.926 EAL: Trying to obtain current memory policy. 00:04:09.926 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.926 EAL: Restoring previous memory policy: 4 00:04:09.927 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.927 EAL: request: mp_malloc_sync 00:04:09.927 EAL: No shared files mode enabled, IPC is disabled 00:04:09.927 EAL: Heap on socket 0 was expanded by 66MB 00:04:09.927 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.927 EAL: request: mp_malloc_sync 00:04:09.927 EAL: No shared files mode enabled, IPC is disabled 00:04:09.927 EAL: Heap on socket 0 was shrunk by 66MB 00:04:09.927 EAL: Trying to obtain current memory policy. 00:04:09.927 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.927 EAL: Restoring previous memory policy: 4 00:04:09.927 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.927 EAL: request: mp_malloc_sync 00:04:09.927 EAL: No shared files mode enabled, IPC is disabled 00:04:09.927 EAL: Heap on socket 0 was expanded by 130MB 00:04:09.927 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.927 EAL: request: mp_malloc_sync 00:04:09.927 EAL: No shared files mode enabled, IPC is disabled 00:04:09.927 EAL: Heap on socket 0 was shrunk by 130MB 00:04:09.927 EAL: Trying to obtain current memory policy. 
00:04:09.927 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.927 EAL: Restoring previous memory policy: 4 00:04:09.927 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.927 EAL: request: mp_malloc_sync 00:04:09.927 EAL: No shared files mode enabled, IPC is disabled 00:04:09.927 EAL: Heap on socket 0 was expanded by 258MB 00:04:09.927 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.186 EAL: request: mp_malloc_sync 00:04:10.186 EAL: No shared files mode enabled, IPC is disabled 00:04:10.186 EAL: Heap on socket 0 was shrunk by 258MB 00:04:10.186 EAL: Trying to obtain current memory policy. 00:04:10.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.186 EAL: Restoring previous memory policy: 4 00:04:10.186 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.186 EAL: request: mp_malloc_sync 00:04:10.186 EAL: No shared files mode enabled, IPC is disabled 00:04:10.186 EAL: Heap on socket 0 was expanded by 514MB 00:04:10.186 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.186 EAL: request: mp_malloc_sync 00:04:10.186 EAL: No shared files mode enabled, IPC is disabled 00:04:10.186 EAL: Heap on socket 0 was shrunk by 514MB 00:04:10.186 EAL: Trying to obtain current memory policy. 00:04:10.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.446 EAL: Restoring previous memory policy: 4 00:04:10.446 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.446 EAL: request: mp_malloc_sync 00:04:10.446 EAL: No shared files mode enabled, IPC is disabled 00:04:10.446 EAL: Heap on socket 0 was expanded by 1026MB 00:04:10.446 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.446 EAL: request: mp_malloc_sync 00:04:10.446 EAL: No shared files mode enabled, IPC is disabled 00:04:10.446 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:10.446 passed 00:04:10.446 00:04:10.446 Run Summary: Type Total Ran Passed Failed Inactive 00:04:10.446 suites 1 1 n/a 0 0 00:04:10.446 tests 2 2 2 0 0 00:04:10.446 asserts 497 497 497 0 n/a 00:04:10.446 00:04:10.446 Elapsed time = 0.640 seconds 00:04:10.446 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.446 EAL: request: mp_malloc_sync 00:04:10.446 EAL: No shared files mode enabled, IPC is disabled 00:04:10.446 EAL: Heap on socket 0 was shrunk by 2MB 00:04:10.446 EAL: No shared files mode enabled, IPC is disabled 00:04:10.446 EAL: No shared files mode enabled, IPC is disabled 00:04:10.446 EAL: No shared files mode enabled, IPC is disabled 00:04:10.446 00:04:10.446 real 0m0.777s 00:04:10.446 user 0m0.401s 00:04:10.446 sys 0m0.351s 00:04:10.446 16:29:02 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.446 16:29:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:10.446 ************************************ 00:04:10.446 END TEST env_vtophys 00:04:10.446 ************************************ 00:04:10.705 16:29:02 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:10.705 16:29:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.705 16:29:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.705 16:29:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.705 ************************************ 00:04:10.705 START TEST env_pci 00:04:10.705 ************************************ 00:04:10.705 16:29:02 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:10.705 00:04:10.705 00:04:10.705 CUnit - A unit testing 
framework for C - Version 2.1-3 00:04:10.705 http://cunit.sourceforge.net/ 00:04:10.705 00:04:10.705 00:04:10.705 Suite: pci 00:04:10.705 Test: pci_hook ...[2024-10-01 16:29:02.215795] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2472780 has claimed it 00:04:10.705 EAL: Cannot find device (10000:00:01.0) 00:04:10.705 EAL: Failed to attach device on primary process 00:04:10.705 passed 00:04:10.705 00:04:10.705 Run Summary: Type Total Ran Passed Failed Inactive 00:04:10.705 suites 1 1 n/a 0 0 00:04:10.705 tests 1 1 1 0 0 00:04:10.705 asserts 25 25 25 0 n/a 00:04:10.705 00:04:10.705 Elapsed time = 0.031 seconds 00:04:10.705 00:04:10.705 real 0m0.053s 00:04:10.705 user 0m0.012s 00:04:10.705 sys 0m0.040s 00:04:10.705 16:29:02 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.705 16:29:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:10.705 ************************************ 00:04:10.705 END TEST env_pci 00:04:10.705 ************************************ 00:04:10.705 16:29:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:10.705 16:29:02 env -- env/env.sh@15 -- # uname 00:04:10.705 16:29:02 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:10.705 16:29:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:10.705 16:29:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:10.705 16:29:02 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:10.705 16:29:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.705 16:29:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.705 ************************************ 00:04:10.705 START TEST env_dpdk_post_init 00:04:10.705 ************************************ 00:04:10.705 16:29:02 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:10.705 EAL: Detected CPU lcores: 128 00:04:10.705 EAL: Detected NUMA nodes: 2 00:04:10.705 EAL: Detected shared linkage of DPDK 00:04:10.705 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:10.705 EAL: Selected IOVA mode 'VA' 00:04:10.705 EAL: VFIO support initialized 00:04:10.964 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:10.964 EAL: Using IOMMU type 1 (Type 1) 00:04:10.964 EAL: Ignore mapping IO port bar(1) 00:04:11.224 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:11.224 EAL: Ignore mapping IO port bar(1) 00:04:11.483 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:11.483 EAL: Ignore mapping IO port bar(1) 00:04:11.483 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:11.742 EAL: Ignore mapping IO port bar(1) 00:04:11.742 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:12.001 EAL: Ignore mapping IO port bar(1) 00:04:12.001 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:12.261 EAL: Ignore mapping IO port bar(1) 00:04:12.261 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:12.261 EAL: Ignore mapping IO port bar(1) 00:04:12.519 EAL: Probe PCI driver: spdk_ioat 
(8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:12.519 EAL: Ignore mapping IO port bar(1) 00:04:12.780 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:13.347 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:65:00.0 (socket 0) 00:04:13.605 EAL: Ignore mapping IO port bar(1) 00:04:13.605 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:13.863 EAL: Ignore mapping IO port bar(1) 00:04:13.863 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:14.121 EAL: Ignore mapping IO port bar(1) 00:04:14.121 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:14.121 EAL: Ignore mapping IO port bar(1) 00:04:14.379 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:14.379 EAL: Ignore mapping IO port bar(1) 00:04:14.637 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:14.637 EAL: Ignore mapping IO port bar(1) 00:04:14.896 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:14.896 EAL: Ignore mapping IO port bar(1) 00:04:14.896 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:15.156 EAL: Ignore mapping IO port bar(1) 00:04:15.156 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:19.357 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:19.357 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:19.357 Starting DPDK initialization... 00:04:19.357 Starting SPDK post initialization... 00:04:19.357 SPDK NVMe probe 00:04:19.357 Attaching to 0000:65:00.0 00:04:19.357 Attached to 0000:65:00.0 00:04:19.357 Cleaning up... 00:04:21.269 00:04:21.269 real 0m10.351s 00:04:21.269 user 0m3.858s 00:04:21.269 sys 0m0.516s 00:04:21.269 16:29:12 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.269 16:29:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:21.269 ************************************ 00:04:21.269 END TEST env_dpdk_post_init 00:04:21.269 ************************************ 00:04:21.269 16:29:12 env -- env/env.sh@26 -- # uname 00:04:21.269 16:29:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:21.269 16:29:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:21.269 16:29:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.269 16:29:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.269 16:29:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.269 ************************************ 00:04:21.269 START TEST env_mem_callbacks 00:04:21.270 ************************************ 00:04:21.270 16:29:12 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:21.270 EAL: Detected CPU lcores: 128 00:04:21.270 EAL: Detected NUMA nodes: 2 00:04:21.270 EAL: Detected shared linkage of DPDK 00:04:21.270 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:21.270 EAL: Selected IOVA mode 'VA' 00:04:21.270 EAL: VFIO support initialized 00:04:21.270 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:21.270 00:04:21.270 00:04:21.270 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.270 http://cunit.sourceforge.net/ 00:04:21.270 00:04:21.270 00:04:21.270 Suite: memory 
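The register/unregister lines in the memory suite below come from a test callback watching SPDK's memory map as buffers are allocated and freed. The API pair being exercised is roughly spdk_mem_register()/spdk_mem_unregister(); a sketch under the assumption of an already-initialized env and a 2 MiB anonymous mapping:

    #include <sys/mman.h>
    #include "spdk/env.h"

    int
    register_region(void)
    {
        size_t len = 2 * 1024 * 1024;
        void *va = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (va == MAP_FAILED) {
            return -1;
        }

        /* Registered ranges become visible to spdk_vtophys() and DMA
         * users; map-notify callbacks fire here (the "register" lines). */
        if (spdk_mem_register(va, len) != 0) {
            munmap(va, len);
            return -1;
        }

        /* ... region usable for I/O here ... */

        spdk_mem_unregister(va, len);   /* the matching "unregister" lines */
        munmap(va, len);
        return 0;
    }
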
00:04:21.270 Test: test ... 00:04:21.270 register 0x200000200000 2097152 00:04:21.270 malloc 3145728 00:04:21.270 register 0x200000400000 4194304 00:04:21.270 buf 0x200000500000 len 3145728 PASSED 00:04:21.270 malloc 64 00:04:21.270 buf 0x2000004fff40 len 64 PASSED 00:04:21.270 malloc 4194304 00:04:21.270 register 0x200000800000 6291456 00:04:21.270 buf 0x200000a00000 len 4194304 PASSED 00:04:21.270 free 0x200000500000 3145728 00:04:21.270 free 0x2000004fff40 64 00:04:21.270 unregister 0x200000400000 4194304 PASSED 00:04:21.270 free 0x200000a00000 4194304 00:04:21.270 unregister 0x200000800000 6291456 PASSED 00:04:21.270 malloc 8388608 00:04:21.270 register 0x200000400000 10485760 00:04:21.270 buf 0x200000600000 len 8388608 PASSED 00:04:21.270 free 0x200000600000 8388608 00:04:21.270 unregister 0x200000400000 10485760 PASSED 00:04:21.270 passed 00:04:21.270 00:04:21.270 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.270 suites 1 1 n/a 0 0 00:04:21.270 tests 1 1 1 0 0 00:04:21.270 asserts 15 15 15 0 n/a 00:04:21.270 00:04:21.270 Elapsed time = 0.008 seconds 00:04:21.270 00:04:21.270 real 0m0.062s 00:04:21.270 user 0m0.023s 00:04:21.270 sys 0m0.038s 00:04:21.270 16:29:12 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.270 16:29:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:21.270 ************************************ 00:04:21.270 END TEST env_mem_callbacks 00:04:21.270 ************************************ 00:04:21.270 00:04:21.270 real 0m12.024s 00:04:21.270 user 0m4.738s 00:04:21.270 sys 0m1.316s 00:04:21.270 16:29:12 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.270 16:29:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.270 ************************************ 00:04:21.270 END TEST env 00:04:21.270 ************************************ 00:04:21.270 16:29:12 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:21.270 16:29:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.270 16:29:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.270 16:29:12 -- common/autotest_common.sh@10 -- # set +x 00:04:21.270 ************************************ 00:04:21.270 START TEST rpc 00:04:21.270 ************************************ 00:04:21.270 16:29:12 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:21.531 * Looking for test storage... 
00:04:21.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:21.531 16:29:13 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:21.531 16:29:13 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:21.531 16:29:13 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:21.531 16:29:13 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:21.531 16:29:13 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.531 16:29:13 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.531 16:29:13 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.531 16:29:13 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.531 16:29:13 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.531 16:29:13 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.531 16:29:13 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.531 16:29:13 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.531 16:29:13 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.531 16:29:13 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.531 16:29:13 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.531 16:29:13 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:21.531 16:29:13 rpc -- scripts/common.sh@345 -- # : 1 00:04:21.531 16:29:13 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.531 16:29:13 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.531 16:29:13 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:21.531 16:29:13 rpc -- scripts/common.sh@353 -- # local d=1 00:04:21.531 16:29:13 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.531 16:29:13 rpc -- scripts/common.sh@355 -- # echo 1 00:04:21.531 16:29:13 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.531 16:29:13 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:21.531 16:29:13 rpc -- scripts/common.sh@353 -- # local d=2 00:04:21.531 16:29:13 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.531 16:29:13 rpc -- scripts/common.sh@355 -- # echo 2 00:04:21.531 16:29:13 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.531 16:29:13 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.531 16:29:13 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.531 16:29:13 rpc -- scripts/common.sh@368 -- # return 0 00:04:21.531 16:29:13 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.531 16:29:13 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:21.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.531 --rc genhtml_branch_coverage=1 00:04:21.531 --rc genhtml_function_coverage=1 00:04:21.531 --rc genhtml_legend=1 00:04:21.531 --rc geninfo_all_blocks=1 00:04:21.531 --rc geninfo_unexecuted_blocks=1 00:04:21.531 00:04:21.531 ' 00:04:21.531 16:29:13 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:21.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.531 --rc genhtml_branch_coverage=1 00:04:21.531 --rc genhtml_function_coverage=1 00:04:21.531 --rc genhtml_legend=1 00:04:21.531 --rc geninfo_all_blocks=1 00:04:21.531 --rc geninfo_unexecuted_blocks=1 00:04:21.531 00:04:21.531 ' 00:04:21.531 16:29:13 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:21.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.531 --rc genhtml_branch_coverage=1 00:04:21.531 --rc genhtml_function_coverage=1 
00:04:21.531 --rc genhtml_legend=1 00:04:21.531 --rc geninfo_all_blocks=1 00:04:21.531 --rc geninfo_unexecuted_blocks=1 00:04:21.531 00:04:21.531 ' 00:04:21.531 16:29:13 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:21.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.531 --rc genhtml_branch_coverage=1 00:04:21.531 --rc genhtml_function_coverage=1 00:04:21.531 --rc genhtml_legend=1 00:04:21.531 --rc geninfo_all_blocks=1 00:04:21.531 --rc geninfo_unexecuted_blocks=1 00:04:21.531 00:04:21.531 ' 00:04:21.531 16:29:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2474734 00:04:21.531 16:29:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.531 16:29:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2474734 00:04:21.531 16:29:13 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:21.531 16:29:13 rpc -- common/autotest_common.sh@831 -- # '[' -z 2474734 ']' 00:04:21.531 16:29:13 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.531 16:29:13 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:21.532 16:29:13 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.532 16:29:13 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:21.532 16:29:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.532 [2024-10-01 16:29:13.208001] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:04:21.532 [2024-10-01 16:29:13.208066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474734 ] 00:04:21.792 [2024-10-01 16:29:13.289186] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.792 [2024-10-01 16:29:13.368067] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:21.792 [2024-10-01 16:29:13.368111] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2474734' to capture a snapshot of events at runtime. 00:04:21.792 [2024-10-01 16:29:13.368118] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:21.792 [2024-10-01 16:29:13.368125] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:21.792 [2024-10-01 16:29:13.368133] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2474734 for offline analysis/debug. 
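The notices above are spdk_tgt bringing up the SPDK application framework and its JSON-RPC listener on /var/tmp/spdk.sock; everything rpc_cmd does later in this suite is a JSON-RPC call against that socket. A bare-bones equivalent of the bring-up, assuming default options (a hypothetical app, not spdk_tgt's actual main):

    #include "spdk/event.h"
    #include "spdk/rpc.h"

    static void
    app_started(void *ctx)
    {
        (void)ctx;
        /* By this point the RPC server is listening; rpc_cmd/rpc.py
         * requests such as bdev_malloc_create are dispatched here. */
    }

    int
    main(int argc, char **argv)
    {
        struct spdk_app_opts opts;
        int rc;

        (void)argc;
        (void)argv;

        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "rpc_sketch";               /* hypothetical app name */
        opts.rpc_addr = SPDK_DEFAULT_RPC_ADDR;  /* /var/tmp/spdk.sock */

        rc = spdk_app_start(&opts, app_started, NULL);
        spdk_app_fini();
        return rc;
    }
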
00:04:21.792 [2024-10-01 16:29:13.368155] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.732 16:29:14 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:22.732 16:29:14 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:22.732 16:29:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:22.732 16:29:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:22.732 16:29:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:22.732 16:29:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:22.732 16:29:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.732 16:29:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.732 16:29:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.732 ************************************ 00:04:22.732 START TEST rpc_integrity 00:04:22.732 ************************************ 00:04:22.732 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:22.732 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:22.732 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.732 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.732 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.732 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:22.732 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:22.732 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:22.732 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:22.732 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.732 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.732 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.732 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:22.732 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:22.732 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.732 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.733 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.733 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:22.733 { 00:04:22.733 "name": "Malloc0", 00:04:22.733 "aliases": [ 00:04:22.733 "3c61cc6b-7a13-4cac-a8b4-88045071a1b9" 00:04:22.733 ], 00:04:22.733 "product_name": "Malloc disk", 00:04:22.733 "block_size": 512, 00:04:22.733 "num_blocks": 16384, 00:04:22.733 "uuid": "3c61cc6b-7a13-4cac-a8b4-88045071a1b9", 00:04:22.733 "assigned_rate_limits": { 00:04:22.733 "rw_ios_per_sec": 0, 00:04:22.733 "rw_mbytes_per_sec": 0, 00:04:22.733 "r_mbytes_per_sec": 0, 00:04:22.733 "w_mbytes_per_sec": 0 00:04:22.733 }, 
00:04:22.733 "claimed": false, 00:04:22.733 "zoned": false, 00:04:22.733 "supported_io_types": { 00:04:22.733 "read": true, 00:04:22.733 "write": true, 00:04:22.733 "unmap": true, 00:04:22.733 "flush": true, 00:04:22.733 "reset": true, 00:04:22.733 "nvme_admin": false, 00:04:22.733 "nvme_io": false, 00:04:22.733 "nvme_io_md": false, 00:04:22.733 "write_zeroes": true, 00:04:22.733 "zcopy": true, 00:04:22.733 "get_zone_info": false, 00:04:22.733 "zone_management": false, 00:04:22.733 "zone_append": false, 00:04:22.733 "compare": false, 00:04:22.733 "compare_and_write": false, 00:04:22.733 "abort": true, 00:04:22.733 "seek_hole": false, 00:04:22.733 "seek_data": false, 00:04:22.733 "copy": true, 00:04:22.733 "nvme_iov_md": false 00:04:22.733 }, 00:04:22.733 "memory_domains": [ 00:04:22.733 { 00:04:22.733 "dma_device_id": "system", 00:04:22.733 "dma_device_type": 1 00:04:22.733 }, 00:04:22.733 { 00:04:22.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.733 "dma_device_type": 2 00:04:22.733 } 00:04:22.733 ], 00:04:22.733 "driver_specific": {} 00:04:22.733 } 00:04:22.733 ]' 00:04:22.733 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:22.733 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:22.733 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:22.733 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.733 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.733 [2024-10-01 16:29:14.235181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:22.733 [2024-10-01 16:29:14.235213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:22.733 [2024-10-01 16:29:14.235226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17a3c40 00:04:22.733 [2024-10-01 16:29:14.235233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:22.733 [2024-10-01 16:29:14.236489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:22.733 [2024-10-01 16:29:14.236509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:22.733 Passthru0 00:04:22.733 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.733 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:22.733 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.733 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.733 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.733 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:22.733 { 00:04:22.733 "name": "Malloc0", 00:04:22.733 "aliases": [ 00:04:22.733 "3c61cc6b-7a13-4cac-a8b4-88045071a1b9" 00:04:22.733 ], 00:04:22.733 "product_name": "Malloc disk", 00:04:22.733 "block_size": 512, 00:04:22.733 "num_blocks": 16384, 00:04:22.733 "uuid": "3c61cc6b-7a13-4cac-a8b4-88045071a1b9", 00:04:22.733 "assigned_rate_limits": { 00:04:22.733 "rw_ios_per_sec": 0, 00:04:22.733 "rw_mbytes_per_sec": 0, 00:04:22.733 "r_mbytes_per_sec": 0, 00:04:22.733 "w_mbytes_per_sec": 0 00:04:22.733 }, 00:04:22.733 "claimed": true, 00:04:22.733 "claim_type": "exclusive_write", 00:04:22.733 "zoned": false, 00:04:22.733 "supported_io_types": { 00:04:22.733 "read": true, 00:04:22.733 "write": true, 00:04:22.733 "unmap": true, 00:04:22.733 "flush": 
true, 00:04:22.733 "reset": true, 00:04:22.733 "nvme_admin": false, 00:04:22.733 "nvme_io": false, 00:04:22.733 "nvme_io_md": false, 00:04:22.733 "write_zeroes": true, 00:04:22.733 "zcopy": true, 00:04:22.733 "get_zone_info": false, 00:04:22.733 "zone_management": false, 00:04:22.733 "zone_append": false, 00:04:22.733 "compare": false, 00:04:22.733 "compare_and_write": false, 00:04:22.733 "abort": true, 00:04:22.733 "seek_hole": false, 00:04:22.733 "seek_data": false, 00:04:22.733 "copy": true, 00:04:22.733 "nvme_iov_md": false 00:04:22.733 }, 00:04:22.733 "memory_domains": [ 00:04:22.733 { 00:04:22.733 "dma_device_id": "system", 00:04:22.733 "dma_device_type": 1 00:04:22.733 }, 00:04:22.733 { 00:04:22.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.733 "dma_device_type": 2 00:04:22.733 } 00:04:22.733 ], 00:04:22.733 "driver_specific": {} 00:04:22.733 }, 00:04:22.733 { 00:04:22.733 "name": "Passthru0", 00:04:22.733 "aliases": [ 00:04:22.733 "291629db-5057-55d2-a2f3-0eff151a9cae" 00:04:22.733 ], 00:04:22.733 "product_name": "passthru", 00:04:22.733 "block_size": 512, 00:04:22.733 "num_blocks": 16384, 00:04:22.733 "uuid": "291629db-5057-55d2-a2f3-0eff151a9cae", 00:04:22.733 "assigned_rate_limits": { 00:04:22.733 "rw_ios_per_sec": 0, 00:04:22.733 "rw_mbytes_per_sec": 0, 00:04:22.733 "r_mbytes_per_sec": 0, 00:04:22.733 "w_mbytes_per_sec": 0 00:04:22.733 }, 00:04:22.733 "claimed": false, 00:04:22.733 "zoned": false, 00:04:22.733 "supported_io_types": { 00:04:22.733 "read": true, 00:04:22.733 "write": true, 00:04:22.733 "unmap": true, 00:04:22.733 "flush": true, 00:04:22.733 "reset": true, 00:04:22.733 "nvme_admin": false, 00:04:22.733 "nvme_io": false, 00:04:22.733 "nvme_io_md": false, 00:04:22.733 "write_zeroes": true, 00:04:22.733 "zcopy": true, 00:04:22.733 "get_zone_info": false, 00:04:22.733 "zone_management": false, 00:04:22.733 "zone_append": false, 00:04:22.733 "compare": false, 00:04:22.733 "compare_and_write": false, 00:04:22.733 "abort": true, 00:04:22.733 "seek_hole": false, 00:04:22.733 "seek_data": false, 00:04:22.733 "copy": true, 00:04:22.733 "nvme_iov_md": false 00:04:22.733 }, 00:04:22.733 "memory_domains": [ 00:04:22.733 { 00:04:22.733 "dma_device_id": "system", 00:04:22.733 "dma_device_type": 1 00:04:22.733 }, 00:04:22.733 { 00:04:22.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.733 "dma_device_type": 2 00:04:22.733 } 00:04:22.733 ], 00:04:22.733 "driver_specific": { 00:04:22.733 "passthru": { 00:04:22.733 "name": "Passthru0", 00:04:22.733 "base_bdev_name": "Malloc0" 00:04:22.733 } 00:04:22.733 } 00:04:22.733 } 00:04:22.733 ]' 00:04:22.733 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:22.733 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:22.733 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:22.733 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.733 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.733 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.733 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:22.733 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.733 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.733 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.733 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:22.733 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.733 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.733 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.733 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:22.733 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:22.733 16:29:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:22.733 00:04:22.733 real 0m0.290s 00:04:22.733 user 0m0.180s 00:04:22.733 sys 0m0.046s 00:04:22.733 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.733 16:29:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.733 ************************************ 00:04:22.733 END TEST rpc_integrity 00:04:22.733 ************************************ 00:04:22.994 16:29:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:22.994 16:29:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.994 16:29:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.994 16:29:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.994 ************************************ 00:04:22.994 START TEST rpc_plugins 00:04:22.994 ************************************ 00:04:22.994 16:29:14 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:22.994 16:29:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:22.994 16:29:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.994 16:29:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.994 16:29:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.994 16:29:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:22.994 16:29:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:22.994 16:29:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.994 16:29:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.994 16:29:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.994 16:29:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:22.994 { 00:04:22.994 "name": "Malloc1", 00:04:22.994 "aliases": [ 00:04:22.994 "88c4ad0e-5132-4ef9-821a-59616de2dc58" 00:04:22.994 ], 00:04:22.994 "product_name": "Malloc disk", 00:04:22.994 "block_size": 4096, 00:04:22.994 "num_blocks": 256, 00:04:22.994 "uuid": "88c4ad0e-5132-4ef9-821a-59616de2dc58", 00:04:22.994 "assigned_rate_limits": { 00:04:22.994 "rw_ios_per_sec": 0, 00:04:22.994 "rw_mbytes_per_sec": 0, 00:04:22.994 "r_mbytes_per_sec": 0, 00:04:22.994 "w_mbytes_per_sec": 0 00:04:22.994 }, 00:04:22.994 "claimed": false, 00:04:22.994 "zoned": false, 00:04:22.994 "supported_io_types": { 00:04:22.994 "read": true, 00:04:22.994 "write": true, 00:04:22.994 "unmap": true, 00:04:22.994 "flush": true, 00:04:22.994 "reset": true, 00:04:22.994 "nvme_admin": false, 00:04:22.994 "nvme_io": false, 00:04:22.994 "nvme_io_md": false, 00:04:22.994 "write_zeroes": true, 00:04:22.994 "zcopy": true, 00:04:22.994 "get_zone_info": false, 00:04:22.994 "zone_management": false, 00:04:22.994 "zone_append": false, 00:04:22.994 "compare": false, 00:04:22.994 "compare_and_write": false, 00:04:22.994 "abort": true, 00:04:22.994 "seek_hole": false, 00:04:22.994 "seek_data": false, 00:04:22.994 "copy": true, 00:04:22.994 "nvme_iov_md": false 
00:04:22.994 }, 00:04:22.994 "memory_domains": [ 00:04:22.994 { 00:04:22.994 "dma_device_id": "system", 00:04:22.994 "dma_device_type": 1 00:04:22.994 }, 00:04:22.994 { 00:04:22.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.994 "dma_device_type": 2 00:04:22.994 } 00:04:22.994 ], 00:04:22.994 "driver_specific": {} 00:04:22.994 } 00:04:22.994 ]' 00:04:22.994 16:29:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:22.994 16:29:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:22.994 16:29:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:22.994 16:29:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.994 16:29:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.994 16:29:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.994 16:29:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:22.994 16:29:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.994 16:29:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.994 16:29:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.994 16:29:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:22.994 16:29:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:22.994 16:29:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:22.994 00:04:22.994 real 0m0.148s 00:04:22.994 user 0m0.091s 00:04:22.994 sys 0m0.021s 00:04:22.994 16:29:14 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.994 16:29:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.994 ************************************ 00:04:22.994 END TEST rpc_plugins 00:04:22.994 ************************************ 00:04:22.994 16:29:14 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:22.994 16:29:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.994 16:29:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.994 16:29:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.255 ************************************ 00:04:23.255 START TEST rpc_trace_cmd_test 00:04:23.255 ************************************ 00:04:23.255 16:29:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:23.255 16:29:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:23.255 16:29:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:23.255 16:29:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.255 16:29:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:23.255 16:29:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.255 16:29:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:23.255 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2474734", 00:04:23.255 "tpoint_group_mask": "0x8", 00:04:23.255 "iscsi_conn": { 00:04:23.255 "mask": "0x2", 00:04:23.255 "tpoint_mask": "0x0" 00:04:23.255 }, 00:04:23.255 "scsi": { 00:04:23.255 "mask": "0x4", 00:04:23.255 "tpoint_mask": "0x0" 00:04:23.255 }, 00:04:23.255 "bdev": { 00:04:23.255 "mask": "0x8", 00:04:23.255 "tpoint_mask": "0xffffffffffffffff" 00:04:23.255 }, 00:04:23.255 "nvmf_rdma": { 00:04:23.255 "mask": "0x10", 00:04:23.255 "tpoint_mask": "0x0" 00:04:23.255 }, 00:04:23.255 "nvmf_tcp": { 00:04:23.255 "mask": "0x20", 00:04:23.255 
"tpoint_mask": "0x0" 00:04:23.255 }, 00:04:23.255 "ftl": { 00:04:23.255 "mask": "0x40", 00:04:23.255 "tpoint_mask": "0x0" 00:04:23.255 }, 00:04:23.255 "blobfs": { 00:04:23.255 "mask": "0x80", 00:04:23.255 "tpoint_mask": "0x0" 00:04:23.255 }, 00:04:23.255 "dsa": { 00:04:23.255 "mask": "0x200", 00:04:23.255 "tpoint_mask": "0x0" 00:04:23.255 }, 00:04:23.255 "thread": { 00:04:23.255 "mask": "0x400", 00:04:23.255 "tpoint_mask": "0x0" 00:04:23.255 }, 00:04:23.255 "nvme_pcie": { 00:04:23.255 "mask": "0x800", 00:04:23.255 "tpoint_mask": "0x0" 00:04:23.255 }, 00:04:23.255 "iaa": { 00:04:23.255 "mask": "0x1000", 00:04:23.255 "tpoint_mask": "0x0" 00:04:23.255 }, 00:04:23.255 "nvme_tcp": { 00:04:23.255 "mask": "0x2000", 00:04:23.255 "tpoint_mask": "0x0" 00:04:23.255 }, 00:04:23.255 "bdev_nvme": { 00:04:23.255 "mask": "0x4000", 00:04:23.255 "tpoint_mask": "0x0" 00:04:23.255 }, 00:04:23.255 "sock": { 00:04:23.255 "mask": "0x8000", 00:04:23.255 "tpoint_mask": "0x0" 00:04:23.255 }, 00:04:23.255 "blob": { 00:04:23.255 "mask": "0x10000", 00:04:23.255 "tpoint_mask": "0x0" 00:04:23.255 }, 00:04:23.255 "bdev_raid": { 00:04:23.255 "mask": "0x20000", 00:04:23.255 "tpoint_mask": "0x0" 00:04:23.255 } 00:04:23.255 }' 00:04:23.255 16:29:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:23.255 16:29:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:04:23.255 16:29:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:23.255 16:29:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:23.255 16:29:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:23.255 16:29:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:23.255 16:29:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:23.255 16:29:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:23.255 16:29:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:23.516 16:29:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:23.516 00:04:23.516 real 0m0.250s 00:04:23.516 user 0m0.209s 00:04:23.516 sys 0m0.029s 00:04:23.516 16:29:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.516 16:29:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:23.516 ************************************ 00:04:23.516 END TEST rpc_trace_cmd_test 00:04:23.516 ************************************ 00:04:23.516 16:29:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:23.516 16:29:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:23.516 16:29:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:23.516 16:29:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.516 16:29:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.516 16:29:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.516 ************************************ 00:04:23.516 START TEST rpc_daemon_integrity 00:04:23.516 ************************************ 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:23.516 { 00:04:23.516 "name": "Malloc2", 00:04:23.516 "aliases": [ 00:04:23.516 "d717aa31-c52b-4b9e-8ac6-0a11641c0f4b" 00:04:23.516 ], 00:04:23.516 "product_name": "Malloc disk", 00:04:23.516 "block_size": 512, 00:04:23.516 "num_blocks": 16384, 00:04:23.516 "uuid": "d717aa31-c52b-4b9e-8ac6-0a11641c0f4b", 00:04:23.516 "assigned_rate_limits": { 00:04:23.516 "rw_ios_per_sec": 0, 00:04:23.516 "rw_mbytes_per_sec": 0, 00:04:23.516 "r_mbytes_per_sec": 0, 00:04:23.516 "w_mbytes_per_sec": 0 00:04:23.516 }, 00:04:23.516 "claimed": false, 00:04:23.516 "zoned": false, 00:04:23.516 "supported_io_types": { 00:04:23.516 "read": true, 00:04:23.516 "write": true, 00:04:23.516 "unmap": true, 00:04:23.516 "flush": true, 00:04:23.516 "reset": true, 00:04:23.516 "nvme_admin": false, 00:04:23.516 "nvme_io": false, 00:04:23.516 "nvme_io_md": false, 00:04:23.516 "write_zeroes": true, 00:04:23.516 "zcopy": true, 00:04:23.516 "get_zone_info": false, 00:04:23.516 "zone_management": false, 00:04:23.516 "zone_append": false, 00:04:23.516 "compare": false, 00:04:23.516 "compare_and_write": false, 00:04:23.516 "abort": true, 00:04:23.516 "seek_hole": false, 00:04:23.516 "seek_data": false, 00:04:23.516 "copy": true, 00:04:23.516 "nvme_iov_md": false 00:04:23.516 }, 00:04:23.516 "memory_domains": [ 00:04:23.516 { 00:04:23.516 "dma_device_id": "system", 00:04:23.516 "dma_device_type": 1 00:04:23.516 }, 00:04:23.516 { 00:04:23.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.516 "dma_device_type": 2 00:04:23.516 } 00:04:23.516 ], 00:04:23.516 "driver_specific": {} 00:04:23.516 } 00:04:23.516 ]' 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.516 [2024-10-01 16:29:15.153663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:23.516 [2024-10-01 16:29:15.153689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:23.516 
[2024-10-01 16:29:15.153703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17a3870 00:04:23.516 [2024-10-01 16:29:15.153710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:23.516 [2024-10-01 16:29:15.154872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:23.516 [2024-10-01 16:29:15.154891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:23.516 Passthru0 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.516 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:23.516 { 00:04:23.516 "name": "Malloc2", 00:04:23.516 "aliases": [ 00:04:23.516 "d717aa31-c52b-4b9e-8ac6-0a11641c0f4b" 00:04:23.516 ], 00:04:23.516 "product_name": "Malloc disk", 00:04:23.516 "block_size": 512, 00:04:23.516 "num_blocks": 16384, 00:04:23.516 "uuid": "d717aa31-c52b-4b9e-8ac6-0a11641c0f4b", 00:04:23.516 "assigned_rate_limits": { 00:04:23.516 "rw_ios_per_sec": 0, 00:04:23.516 "rw_mbytes_per_sec": 0, 00:04:23.516 "r_mbytes_per_sec": 0, 00:04:23.516 "w_mbytes_per_sec": 0 00:04:23.516 }, 00:04:23.516 "claimed": true, 00:04:23.516 "claim_type": "exclusive_write", 00:04:23.516 "zoned": false, 00:04:23.516 "supported_io_types": { 00:04:23.516 "read": true, 00:04:23.516 "write": true, 00:04:23.516 "unmap": true, 00:04:23.516 "flush": true, 00:04:23.516 "reset": true, 00:04:23.516 "nvme_admin": false, 00:04:23.516 "nvme_io": false, 00:04:23.516 "nvme_io_md": false, 00:04:23.516 "write_zeroes": true, 00:04:23.516 "zcopy": true, 00:04:23.516 "get_zone_info": false, 00:04:23.516 "zone_management": false, 00:04:23.516 "zone_append": false, 00:04:23.516 "compare": false, 00:04:23.516 "compare_and_write": false, 00:04:23.516 "abort": true, 00:04:23.516 "seek_hole": false, 00:04:23.516 "seek_data": false, 00:04:23.516 "copy": true, 00:04:23.516 "nvme_iov_md": false 00:04:23.516 }, 00:04:23.516 "memory_domains": [ 00:04:23.516 { 00:04:23.516 "dma_device_id": "system", 00:04:23.516 "dma_device_type": 1 00:04:23.516 }, 00:04:23.516 { 00:04:23.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.516 "dma_device_type": 2 00:04:23.516 } 00:04:23.516 ], 00:04:23.516 "driver_specific": {} 00:04:23.516 }, 00:04:23.516 { 00:04:23.516 "name": "Passthru0", 00:04:23.516 "aliases": [ 00:04:23.516 "26432852-ede4-503c-89d7-4c8fb5a7d70a" 00:04:23.516 ], 00:04:23.516 "product_name": "passthru", 00:04:23.516 "block_size": 512, 00:04:23.516 "num_blocks": 16384, 00:04:23.516 "uuid": "26432852-ede4-503c-89d7-4c8fb5a7d70a", 00:04:23.516 "assigned_rate_limits": { 00:04:23.516 "rw_ios_per_sec": 0, 00:04:23.516 "rw_mbytes_per_sec": 0, 00:04:23.516 "r_mbytes_per_sec": 0, 00:04:23.516 "w_mbytes_per_sec": 0 00:04:23.516 }, 00:04:23.516 "claimed": false, 00:04:23.516 "zoned": false, 00:04:23.516 "supported_io_types": { 00:04:23.516 "read": true, 00:04:23.516 "write": true, 00:04:23.516 "unmap": true, 00:04:23.516 "flush": true, 00:04:23.516 "reset": true, 00:04:23.516 "nvme_admin": false, 00:04:23.516 "nvme_io": false, 00:04:23.516 "nvme_io_md": false, 00:04:23.516 
"write_zeroes": true, 00:04:23.516 "zcopy": true, 00:04:23.516 "get_zone_info": false, 00:04:23.517 "zone_management": false, 00:04:23.517 "zone_append": false, 00:04:23.517 "compare": false, 00:04:23.517 "compare_and_write": false, 00:04:23.517 "abort": true, 00:04:23.517 "seek_hole": false, 00:04:23.517 "seek_data": false, 00:04:23.517 "copy": true, 00:04:23.517 "nvme_iov_md": false 00:04:23.517 }, 00:04:23.517 "memory_domains": [ 00:04:23.517 { 00:04:23.517 "dma_device_id": "system", 00:04:23.517 "dma_device_type": 1 00:04:23.517 }, 00:04:23.517 { 00:04:23.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.517 "dma_device_type": 2 00:04:23.517 } 00:04:23.517 ], 00:04:23.517 "driver_specific": { 00:04:23.517 "passthru": { 00:04:23.517 "name": "Passthru0", 00:04:23.517 "base_bdev_name": "Malloc2" 00:04:23.517 } 00:04:23.517 } 00:04:23.517 } 00:04:23.517 ]' 00:04:23.517 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:23.778 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:23.778 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:23.778 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.778 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.778 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.778 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:23.778 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.778 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.778 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.778 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:23.778 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.778 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.778 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.778 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:23.778 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:23.778 16:29:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:23.778 00:04:23.778 real 0m0.305s 00:04:23.778 user 0m0.200s 00:04:23.778 sys 0m0.033s 00:04:23.778 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.778 16:29:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.778 ************************************ 00:04:23.778 END TEST rpc_daemon_integrity 00:04:23.778 ************************************ 00:04:23.778 16:29:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:23.778 16:29:15 rpc -- rpc/rpc.sh@84 -- # killprocess 2474734 00:04:23.778 16:29:15 rpc -- common/autotest_common.sh@950 -- # '[' -z 2474734 ']' 00:04:23.778 16:29:15 rpc -- common/autotest_common.sh@954 -- # kill -0 2474734 00:04:23.778 16:29:15 rpc -- common/autotest_common.sh@955 -- # uname 00:04:23.778 16:29:15 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:23.778 16:29:15 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2474734 00:04:23.778 16:29:15 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:23.778 16:29:15 rpc -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:23.778 16:29:15 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2474734' 00:04:23.778 killing process with pid 2474734 00:04:23.778 16:29:15 rpc -- common/autotest_common.sh@969 -- # kill 2474734 00:04:23.778 16:29:15 rpc -- common/autotest_common.sh@974 -- # wait 2474734 00:04:24.038 00:04:24.038 real 0m2.694s 00:04:24.038 user 0m3.524s 00:04:24.038 sys 0m0.761s 00:04:24.038 16:29:15 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.038 16:29:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.038 ************************************ 00:04:24.038 END TEST rpc 00:04:24.038 ************************************ 00:04:24.038 16:29:15 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:24.038 16:29:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.038 16:29:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.038 16:29:15 -- common/autotest_common.sh@10 -- # set +x 00:04:24.038 ************************************ 00:04:24.038 START TEST skip_rpc 00:04:24.038 ************************************ 00:04:24.039 16:29:15 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:24.299 * Looking for test storage... 00:04:24.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:24.299 16:29:15 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:24.299 16:29:15 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:24.299 16:29:15 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:24.299 16:29:15 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.299 16:29:15 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:24.299 16:29:15 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.299 16:29:15 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:24.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.299 --rc genhtml_branch_coverage=1 00:04:24.299 --rc genhtml_function_coverage=1 00:04:24.299 --rc genhtml_legend=1 00:04:24.299 --rc geninfo_all_blocks=1 00:04:24.299 --rc geninfo_unexecuted_blocks=1 00:04:24.299 00:04:24.299 ' 00:04:24.299 16:29:15 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:24.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.299 --rc genhtml_branch_coverage=1 00:04:24.299 --rc genhtml_function_coverage=1 00:04:24.299 --rc genhtml_legend=1 00:04:24.299 --rc geninfo_all_blocks=1 00:04:24.299 --rc geninfo_unexecuted_blocks=1 00:04:24.299 00:04:24.299 ' 00:04:24.299 16:29:15 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:24.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.299 --rc genhtml_branch_coverage=1 00:04:24.299 --rc genhtml_function_coverage=1 00:04:24.299 --rc genhtml_legend=1 00:04:24.299 --rc geninfo_all_blocks=1 00:04:24.299 --rc geninfo_unexecuted_blocks=1 00:04:24.299 00:04:24.299 ' 00:04:24.299 16:29:15 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:24.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.299 --rc genhtml_branch_coverage=1 00:04:24.299 --rc genhtml_function_coverage=1 00:04:24.299 --rc genhtml_legend=1 00:04:24.299 --rc geninfo_all_blocks=1 00:04:24.299 --rc geninfo_unexecuted_blocks=1 00:04:24.299 00:04:24.299 ' 00:04:24.299 16:29:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:24.299 16:29:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:24.299 16:29:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:24.299 16:29:15 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.299 16:29:15 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.299 16:29:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.299 ************************************ 00:04:24.299 START TEST skip_rpc 00:04:24.299 ************************************ 00:04:24.299 16:29:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:24.299 
16:29:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2475519 00:04:24.299 16:29:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.299 16:29:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:24.299 16:29:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:24.560 [2024-10-01 16:29:16.016644] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:04:24.560 [2024-10-01 16:29:16.016710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2475519 ] 00:04:24.560 [2024-10-01 16:29:16.095751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.560 [2024-10-01 16:29:16.172894] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2475519 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2475519 ']' 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2475519 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:29.848 16:29:20 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2475519 00:04:29.848 16:29:21 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:29.848 16:29:21 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:29.848 16:29:21 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2475519' 00:04:29.848 killing process with pid 2475519 00:04:29.848 16:29:21 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2475519 00:04:29.848 16:29:21 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2475519 00:04:29.848 00:04:29.848 real 0m5.293s 00:04:29.848 user 0m5.085s 00:04:29.848 sys 0m0.255s 00:04:29.848 16:29:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.848 16:29:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.848 ************************************ 00:04:29.848 END TEST skip_rpc 00:04:29.848 ************************************ 00:04:29.848 16:29:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:29.848 16:29:21 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.848 16:29:21 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.848 16:29:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.848 ************************************ 00:04:29.848 START TEST skip_rpc_with_json 00:04:29.848 ************************************ 00:04:29.848 16:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:29.848 16:29:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:29.848 16:29:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2476464 00:04:29.848 16:29:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.849 16:29:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2476464 00:04:29.849 16:29:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.849 16:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2476464 ']' 00:04:29.849 16:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.849 16:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:29.849 16:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.849 16:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:29.849 16:29:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.849 [2024-10-01 16:29:21.396357] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
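The skip_rpc case that just finished (real 0m5.293s) reduces to one negative check: with --no-rpc-server there must be nothing listening for rpc_cmd to talk to. A hedged sketch of that flow, using the binary path and the sleep-5/kill/wait steps visible in the trace; rpc_cmd is the harness helper seen above:

SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
"$SPDK_BIN" --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5                                   # rpc/skip_rpc.sh@19; accounts for most of the ~5s wall time
if rpc_cmd spdk_get_version; then
    echo "RPC unexpectedly succeeded with no RPC server" >&2
    exit 1
fi
kill "$spdk_pid" && wait "$spdk_pid"      # tear down the target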
00:04:29.849 [2024-10-01 16:29:21.396407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2476464 ] 00:04:29.849 [2024-10-01 16:29:21.473840] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.109 [2024-10-01 16:29:21.536668] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.679 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:30.679 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:30.679 16:29:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:30.679 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.679 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.679 [2024-10-01 16:29:22.251466] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:30.679 request: 00:04:30.679 { 00:04:30.679 "trtype": "tcp", 00:04:30.679 "method": "nvmf_get_transports", 00:04:30.679 "req_id": 1 00:04:30.679 } 00:04:30.679 Got JSON-RPC error response 00:04:30.679 response: 00:04:30.679 { 00:04:30.679 "code": -19, 00:04:30.679 "message": "No such device" 00:04:30.679 } 00:04:30.679 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:30.679 16:29:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:30.679 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.679 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.679 [2024-10-01 16:29:22.263584] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.679 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.679 16:29:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:30.679 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.680 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.941 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.941 16:29:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:30.941 { 00:04:30.941 "subsystems": [ 00:04:30.941 { 00:04:30.941 "subsystem": "fsdev", 00:04:30.941 "config": [ 00:04:30.941 { 00:04:30.941 "method": "fsdev_set_opts", 00:04:30.941 "params": { 00:04:30.941 "fsdev_io_pool_size": 65535, 00:04:30.941 "fsdev_io_cache_size": 256 00:04:30.941 } 00:04:30.941 } 00:04:30.941 ] 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "subsystem": "vfio_user_target", 00:04:30.941 "config": null 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "subsystem": "keyring", 00:04:30.941 "config": [] 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "subsystem": "iobuf", 00:04:30.941 "config": [ 00:04:30.941 { 00:04:30.941 "method": "iobuf_set_options", 00:04:30.941 "params": { 00:04:30.941 "small_pool_count": 8192, 00:04:30.941 "large_pool_count": 1024, 00:04:30.941 "small_bufsize": 8192, 00:04:30.941 "large_bufsize": 135168 00:04:30.941 } 00:04:30.941 } 00:04:30.941 ] 00:04:30.941 }, 00:04:30.941 { 
00:04:30.941 "subsystem": "sock", 00:04:30.941 "config": [ 00:04:30.941 { 00:04:30.941 "method": "sock_set_default_impl", 00:04:30.941 "params": { 00:04:30.941 "impl_name": "posix" 00:04:30.941 } 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "method": "sock_impl_set_options", 00:04:30.941 "params": { 00:04:30.941 "impl_name": "ssl", 00:04:30.941 "recv_buf_size": 4096, 00:04:30.941 "send_buf_size": 4096, 00:04:30.941 "enable_recv_pipe": true, 00:04:30.941 "enable_quickack": false, 00:04:30.941 "enable_placement_id": 0, 00:04:30.941 "enable_zerocopy_send_server": true, 00:04:30.941 "enable_zerocopy_send_client": false, 00:04:30.941 "zerocopy_threshold": 0, 00:04:30.941 "tls_version": 0, 00:04:30.941 "enable_ktls": false 00:04:30.941 } 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "method": "sock_impl_set_options", 00:04:30.941 "params": { 00:04:30.941 "impl_name": "posix", 00:04:30.941 "recv_buf_size": 2097152, 00:04:30.941 "send_buf_size": 2097152, 00:04:30.941 "enable_recv_pipe": true, 00:04:30.941 "enable_quickack": false, 00:04:30.941 "enable_placement_id": 0, 00:04:30.941 "enable_zerocopy_send_server": true, 00:04:30.941 "enable_zerocopy_send_client": false, 00:04:30.941 "zerocopy_threshold": 0, 00:04:30.941 "tls_version": 0, 00:04:30.941 "enable_ktls": false 00:04:30.941 } 00:04:30.941 } 00:04:30.941 ] 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "subsystem": "vmd", 00:04:30.941 "config": [] 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "subsystem": "accel", 00:04:30.941 "config": [ 00:04:30.941 { 00:04:30.941 "method": "accel_set_options", 00:04:30.941 "params": { 00:04:30.941 "small_cache_size": 128, 00:04:30.941 "large_cache_size": 16, 00:04:30.941 "task_count": 2048, 00:04:30.941 "sequence_count": 2048, 00:04:30.941 "buf_count": 2048 00:04:30.941 } 00:04:30.941 } 00:04:30.941 ] 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "subsystem": "bdev", 00:04:30.941 "config": [ 00:04:30.941 { 00:04:30.941 "method": "bdev_set_options", 00:04:30.941 "params": { 00:04:30.941 "bdev_io_pool_size": 65535, 00:04:30.941 "bdev_io_cache_size": 256, 00:04:30.941 "bdev_auto_examine": true, 00:04:30.941 "iobuf_small_cache_size": 128, 00:04:30.941 "iobuf_large_cache_size": 16 00:04:30.941 } 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "method": "bdev_raid_set_options", 00:04:30.941 "params": { 00:04:30.941 "process_window_size_kb": 1024, 00:04:30.941 "process_max_bandwidth_mb_sec": 0 00:04:30.941 } 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "method": "bdev_iscsi_set_options", 00:04:30.941 "params": { 00:04:30.941 "timeout_sec": 30 00:04:30.941 } 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "method": "bdev_nvme_set_options", 00:04:30.941 "params": { 00:04:30.941 "action_on_timeout": "none", 00:04:30.941 "timeout_us": 0, 00:04:30.941 "timeout_admin_us": 0, 00:04:30.941 "keep_alive_timeout_ms": 10000, 00:04:30.941 "arbitration_burst": 0, 00:04:30.941 "low_priority_weight": 0, 00:04:30.941 "medium_priority_weight": 0, 00:04:30.941 "high_priority_weight": 0, 00:04:30.941 "nvme_adminq_poll_period_us": 10000, 00:04:30.941 "nvme_ioq_poll_period_us": 0, 00:04:30.941 "io_queue_requests": 0, 00:04:30.941 "delay_cmd_submit": true, 00:04:30.941 "transport_retry_count": 4, 00:04:30.941 "bdev_retry_count": 3, 00:04:30.941 "transport_ack_timeout": 0, 00:04:30.941 "ctrlr_loss_timeout_sec": 0, 00:04:30.941 "reconnect_delay_sec": 0, 00:04:30.941 "fast_io_fail_timeout_sec": 0, 00:04:30.941 "disable_auto_failback": false, 00:04:30.941 "generate_uuids": false, 00:04:30.941 "transport_tos": 0, 00:04:30.941 "nvme_error_stat": false, 
00:04:30.941 "rdma_srq_size": 0, 00:04:30.941 "io_path_stat": false, 00:04:30.941 "allow_accel_sequence": false, 00:04:30.941 "rdma_max_cq_size": 0, 00:04:30.941 "rdma_cm_event_timeout_ms": 0, 00:04:30.941 "dhchap_digests": [ 00:04:30.941 "sha256", 00:04:30.941 "sha384", 00:04:30.941 "sha512" 00:04:30.941 ], 00:04:30.941 "dhchap_dhgroups": [ 00:04:30.941 "null", 00:04:30.941 "ffdhe2048", 00:04:30.941 "ffdhe3072", 00:04:30.941 "ffdhe4096", 00:04:30.941 "ffdhe6144", 00:04:30.941 "ffdhe8192" 00:04:30.941 ] 00:04:30.941 } 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "method": "bdev_nvme_set_hotplug", 00:04:30.941 "params": { 00:04:30.941 "period_us": 100000, 00:04:30.941 "enable": false 00:04:30.941 } 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "method": "bdev_wait_for_examine" 00:04:30.941 } 00:04:30.941 ] 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "subsystem": "scsi", 00:04:30.941 "config": null 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "subsystem": "scheduler", 00:04:30.941 "config": [ 00:04:30.941 { 00:04:30.941 "method": "framework_set_scheduler", 00:04:30.941 "params": { 00:04:30.941 "name": "static" 00:04:30.941 } 00:04:30.941 } 00:04:30.941 ] 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "subsystem": "vhost_scsi", 00:04:30.941 "config": [] 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "subsystem": "vhost_blk", 00:04:30.941 "config": [] 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "subsystem": "ublk", 00:04:30.941 "config": [] 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "subsystem": "nbd", 00:04:30.941 "config": [] 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "subsystem": "nvmf", 00:04:30.941 "config": [ 00:04:30.941 { 00:04:30.941 "method": "nvmf_set_config", 00:04:30.941 "params": { 00:04:30.941 "discovery_filter": "match_any", 00:04:30.941 "admin_cmd_passthru": { 00:04:30.941 "identify_ctrlr": false 00:04:30.941 }, 00:04:30.941 "dhchap_digests": [ 00:04:30.941 "sha256", 00:04:30.941 "sha384", 00:04:30.941 "sha512" 00:04:30.941 ], 00:04:30.941 "dhchap_dhgroups": [ 00:04:30.941 "null", 00:04:30.941 "ffdhe2048", 00:04:30.941 "ffdhe3072", 00:04:30.941 "ffdhe4096", 00:04:30.941 "ffdhe6144", 00:04:30.941 "ffdhe8192" 00:04:30.941 ] 00:04:30.941 } 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "method": "nvmf_set_max_subsystems", 00:04:30.941 "params": { 00:04:30.941 "max_subsystems": 1024 00:04:30.941 } 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "method": "nvmf_set_crdt", 00:04:30.941 "params": { 00:04:30.941 "crdt1": 0, 00:04:30.941 "crdt2": 0, 00:04:30.941 "crdt3": 0 00:04:30.941 } 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "method": "nvmf_create_transport", 00:04:30.941 "params": { 00:04:30.941 "trtype": "TCP", 00:04:30.941 "max_queue_depth": 128, 00:04:30.941 "max_io_qpairs_per_ctrlr": 127, 00:04:30.941 "in_capsule_data_size": 4096, 00:04:30.941 "max_io_size": 131072, 00:04:30.941 "io_unit_size": 131072, 00:04:30.941 "max_aq_depth": 128, 00:04:30.941 "num_shared_buffers": 511, 00:04:30.941 "buf_cache_size": 4294967295, 00:04:30.941 "dif_insert_or_strip": false, 00:04:30.941 "zcopy": false, 00:04:30.941 "c2h_success": true, 00:04:30.941 "sock_priority": 0, 00:04:30.941 "abort_timeout_sec": 1, 00:04:30.941 "ack_timeout": 0, 00:04:30.941 "data_wr_pool_size": 0 00:04:30.941 } 00:04:30.941 } 00:04:30.941 ] 00:04:30.941 }, 00:04:30.941 { 00:04:30.941 "subsystem": "iscsi", 00:04:30.941 "config": [ 00:04:30.941 { 00:04:30.942 "method": "iscsi_set_options", 00:04:30.942 "params": { 00:04:30.942 "node_base": "iqn.2016-06.io.spdk", 00:04:30.942 "max_sessions": 128, 00:04:30.942 
"max_connections_per_session": 2, 00:04:30.942 "max_queue_depth": 64, 00:04:30.942 "default_time2wait": 2, 00:04:30.942 "default_time2retain": 20, 00:04:30.942 "first_burst_length": 8192, 00:04:30.942 "immediate_data": true, 00:04:30.942 "allow_duplicated_isid": false, 00:04:30.942 "error_recovery_level": 0, 00:04:30.942 "nop_timeout": 60, 00:04:30.942 "nop_in_interval": 30, 00:04:30.942 "disable_chap": false, 00:04:30.942 "require_chap": false, 00:04:30.942 "mutual_chap": false, 00:04:30.942 "chap_group": 0, 00:04:30.942 "max_large_datain_per_connection": 64, 00:04:30.942 "max_r2t_per_connection": 4, 00:04:30.942 "pdu_pool_size": 36864, 00:04:30.942 "immediate_data_pool_size": 16384, 00:04:30.942 "data_out_pool_size": 2048 00:04:30.942 } 00:04:30.942 } 00:04:30.942 ] 00:04:30.942 } 00:04:30.942 ] 00:04:30.942 } 00:04:30.942 16:29:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:30.942 16:29:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2476464 00:04:30.942 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2476464 ']' 00:04:30.942 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2476464 00:04:30.942 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:30.942 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:30.942 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2476464 00:04:30.942 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:30.942 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:30.942 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2476464' 00:04:30.942 killing process with pid 2476464 00:04:30.942 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2476464 00:04:30.942 16:29:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2476464 00:04:31.202 16:29:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2476711 00:04:31.202 16:29:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:31.202 16:29:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:36.485 16:29:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2476711 00:04:36.485 16:29:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2476711 ']' 00:04:36.485 16:29:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2476711 00:04:36.485 16:29:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:36.485 16:29:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:36.485 16:29:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2476711 00:04:36.485 16:29:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:36.485 16:29:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:36.485 16:29:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 2476711' 00:04:36.485 killing process with pid 2476711 00:04:36.485 16:29:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2476711 00:04:36.485 16:29:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2476711 00:04:36.485 16:29:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:36.485 16:29:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:36.485 00:04:36.485 real 0m6.687s 00:04:36.485 user 0m6.645s 00:04:36.485 sys 0m0.578s 00:04:36.485 16:29:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.485 16:29:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.485 ************************************ 00:04:36.485 END TEST skip_rpc_with_json 00:04:36.485 ************************************ 00:04:36.485 16:29:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:36.485 16:29:28 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.485 16:29:28 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.485 16:29:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.485 ************************************ 00:04:36.485 START TEST skip_rpc_with_delay 00:04:36.485 ************************************ 00:04:36.485 16:29:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:36.485 16:29:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:36.485 16:29:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:36.485 16:29:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:36.485 16:29:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.486 16:29:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:36.486 16:29:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.486 16:29:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:36.486 16:29:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.486 16:29:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:36.486 16:29:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.486 16:29:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:36.486 16:29:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:36.486 [2024-10-01 
16:29:28.155778] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:36.486 [2024-10-01 16:29:28.155870] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:36.746 16:29:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:36.746 16:29:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:36.746 16:29:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:36.746 16:29:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:36.746 00:04:36.746 real 0m0.077s 00:04:36.746 user 0m0.049s 00:04:36.746 sys 0m0.028s 00:04:36.746 16:29:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.746 16:29:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:36.746 ************************************ 00:04:36.746 END TEST skip_rpc_with_delay 00:04:36.746 ************************************ 00:04:36.746 16:29:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:36.746 16:29:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:36.746 16:29:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:36.746 16:29:28 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.746 16:29:28 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.746 16:29:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.746 ************************************ 00:04:36.746 START TEST exit_on_failed_rpc_init 00:04:36.746 ************************************ 00:04:36.746 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:36.746 16:29:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2477701 00:04:36.746 16:29:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2477701 00:04:36.746 16:29:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.746 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2477701 ']' 00:04:36.746 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.746 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:36.746 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.747 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:36.747 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:36.747 [2024-10-01 16:29:28.312058] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
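skip_rpc_with_delay, summarized: --wait-for-rpc is only meaningful when an RPC server will start, so pairing it with --no-rpc-server has to fail fast, which the app.c:840 error above confirms. A hedged one-command equivalent, same binary path as the log:

if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "target accepted --wait-for-rpc without an RPC server" >&2
    exit 1
fi
# exit_on_failed_rpc_init, starting above, probes the inverse failure: a second
# spdk_tgt binding the already-claimed /var/tmp/spdk.sock must exit non-zero.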
00:04:36.747 [2024-10-01 16:29:28.312104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477701 ] 00:04:36.747 [2024-10-01 16:29:28.387551] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.006 [2024-10-01 16:29:28.451600] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.006 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:37.006 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:37.006 16:29:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.006 16:29:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.006 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:37.006 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.006 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.006 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.006 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.006 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.006 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.006 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.006 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.006 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:37.006 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.267 [2024-10-01 16:29:28.703640] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:04:37.268 [2024-10-01 16:29:28.703688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477760 ] 00:04:37.268 [2024-10-01 16:29:28.752952] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.268 [2024-10-01 16:29:28.807087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.268 [2024-10-01 16:29:28.807139] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:37.268 [2024-10-01 16:29:28.807147] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:37.268 [2024-10-01 16:29:28.807153] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:37.268 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:37.268 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:37.268 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:37.268 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:37.268 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:37.268 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:37.268 16:29:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:37.268 16:29:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2477701 00:04:37.268 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2477701 ']' 00:04:37.268 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2477701 00:04:37.268 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:37.268 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:37.268 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2477701 00:04:37.268 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:37.268 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:37.268 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2477701' 00:04:37.268 killing process with pid 2477701 00:04:37.268 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2477701 00:04:37.268 16:29:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2477701 00:04:37.529 00:04:37.529 real 0m0.878s 00:04:37.529 user 0m1.025s 00:04:37.529 sys 0m0.338s 00:04:37.529 16:29:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.529 16:29:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.529 ************************************ 00:04:37.529 END TEST exit_on_failed_rpc_init 00:04:37.529 ************************************ 00:04:37.529 16:29:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:37.529 00:04:37.529 real 0m13.461s 00:04:37.529 user 0m13.039s 00:04:37.529 sys 0m1.520s 00:04:37.529 16:29:29 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.529 16:29:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.529 ************************************ 00:04:37.529 END TEST skip_rpc 00:04:37.529 ************************************ 00:04:37.789 16:29:29 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:37.789 16:29:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.789 16:29:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.789 16:29:29 -- 
common/autotest_common.sh@10 -- # set +x 00:04:37.789 ************************************ 00:04:37.789 START TEST rpc_client 00:04:37.789 ************************************ 00:04:37.789 16:29:29 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:37.789 * Looking for test storage... 00:04:37.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:37.789 16:29:29 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:37.789 16:29:29 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:37.789 16:29:29 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:37.789 16:29:29 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.789 16:29:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:37.789 16:29:29 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.789 16:29:29 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:37.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.790 --rc genhtml_branch_coverage=1 00:04:37.790 --rc genhtml_function_coverage=1 00:04:37.790 --rc genhtml_legend=1 00:04:37.790 --rc geninfo_all_blocks=1 00:04:37.790 --rc geninfo_unexecuted_blocks=1 00:04:37.790 00:04:37.790 ' 00:04:37.790 16:29:29 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:37.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.790 --rc genhtml_branch_coverage=1 00:04:37.790 --rc genhtml_function_coverage=1 00:04:37.790 --rc genhtml_legend=1 00:04:37.790 --rc geninfo_all_blocks=1 00:04:37.790 --rc geninfo_unexecuted_blocks=1 00:04:37.790 00:04:37.790 ' 00:04:37.790 16:29:29 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:37.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.790 --rc genhtml_branch_coverage=1 00:04:37.790 --rc genhtml_function_coverage=1 00:04:37.790 --rc genhtml_legend=1 00:04:37.790 --rc geninfo_all_blocks=1 00:04:37.790 --rc geninfo_unexecuted_blocks=1 00:04:37.790 00:04:37.790 ' 00:04:37.790 16:29:29 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:37.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.790 --rc genhtml_branch_coverage=1 00:04:37.790 --rc genhtml_function_coverage=1 00:04:37.790 --rc genhtml_legend=1 00:04:37.790 --rc geninfo_all_blocks=1 00:04:37.790 --rc geninfo_unexecuted_blocks=1 00:04:37.790 00:04:37.790 ' 00:04:37.790 16:29:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:37.790 OK 00:04:38.050 16:29:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:38.050 00:04:38.050 real 0m0.224s 00:04:38.050 user 0m0.142s 00:04:38.050 sys 0m0.095s 00:04:38.050 16:29:29 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.050 16:29:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:38.050 ************************************ 00:04:38.050 END TEST rpc_client 00:04:38.050 ************************************ 00:04:38.050 16:29:29 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
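The rpc_client test that just reported OK exercises SPDK's C JSON-RPC client library end to end against a live target. The scripted counterpart used everywhere else in this log is rpc.py pointed at the same Unix socket; a minimal hand-run example, assuming a target is already listening on the default socket:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk.sock spdk_get_version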
00:04:38.050 16:29:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.050 16:29:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.050 16:29:29 -- common/autotest_common.sh@10 -- # set +x 00:04:38.050 ************************************ 00:04:38.050 START TEST json_config 00:04:38.050 ************************************ 00:04:38.050 16:29:29 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:38.050 16:29:29 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:38.050 16:29:29 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:38.050 16:29:29 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:38.050 16:29:29 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:38.050 16:29:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.050 16:29:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.050 16:29:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.050 16:29:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.050 16:29:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.050 16:29:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.050 16:29:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.050 16:29:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.050 16:29:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.050 16:29:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.050 16:29:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.050 16:29:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:38.050 16:29:29 json_config -- scripts/common.sh@345 -- # : 1 00:04:38.050 16:29:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.050 16:29:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.050 16:29:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:38.050 16:29:29 json_config -- scripts/common.sh@353 -- # local d=1 00:04:38.050 16:29:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.050 16:29:29 json_config -- scripts/common.sh@355 -- # echo 1 00:04:38.050 16:29:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.050 16:29:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:38.050 16:29:29 json_config -- scripts/common.sh@353 -- # local d=2 00:04:38.050 16:29:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.050 16:29:29 json_config -- scripts/common.sh@355 -- # echo 2 00:04:38.050 16:29:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.050 16:29:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.050 16:29:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.050 16:29:29 json_config -- scripts/common.sh@368 -- # return 0 00:04:38.050 16:29:29 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.050 16:29:29 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:38.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.050 --rc genhtml_branch_coverage=1 00:04:38.050 --rc genhtml_function_coverage=1 00:04:38.050 --rc genhtml_legend=1 00:04:38.050 --rc geninfo_all_blocks=1 00:04:38.050 --rc geninfo_unexecuted_blocks=1 00:04:38.050 00:04:38.050 ' 00:04:38.050 16:29:29 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:38.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.050 --rc genhtml_branch_coverage=1 00:04:38.050 --rc genhtml_function_coverage=1 00:04:38.050 --rc genhtml_legend=1 00:04:38.050 --rc geninfo_all_blocks=1 00:04:38.050 --rc geninfo_unexecuted_blocks=1 00:04:38.051 00:04:38.051 ' 00:04:38.051 16:29:29 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:38.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.051 --rc genhtml_branch_coverage=1 00:04:38.051 --rc genhtml_function_coverage=1 00:04:38.051 --rc genhtml_legend=1 00:04:38.051 --rc geninfo_all_blocks=1 00:04:38.051 --rc geninfo_unexecuted_blocks=1 00:04:38.051 00:04:38.051 ' 00:04:38.051 16:29:29 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:38.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.051 --rc genhtml_branch_coverage=1 00:04:38.051 --rc genhtml_function_coverage=1 00:04:38.051 --rc genhtml_legend=1 00:04:38.051 --rc geninfo_all_blocks=1 00:04:38.051 --rc geninfo_unexecuted_blocks=1 00:04:38.051 00:04:38.051 ' 00:04:38.051 16:29:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:38.051 16:29:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:38.313 16:29:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:38.313 16:29:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:38.313 16:29:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:38.313 16:29:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:38.313 16:29:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:38.313 16:29:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:38.313 16:29:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
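The nvmf/common.sh block above pins the fabric constants (ports 4420-4422, the 192.168.100 prefix) and derives a host identity once so every later nvme connect presents the same NQN/ID pair. A hedged reconstruction of that derivation; the ##*: expansion is an assumption about how common.sh strips the UUID, and the final connect line only illustrates where the variables end up:

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-...
NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: keep the bare UUID after the last colon
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
nvme connect "${NVME_HOST[@]}" -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn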
00:04:38.313 16:29:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:38.313 16:29:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:38.313 16:29:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:38.313 16:29:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:04:38.313 16:29:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:04:38.313 16:29:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:38.313 16:29:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:38.313 16:29:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:38.313 16:29:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:38.313 16:29:29 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:38.313 16:29:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:38.313 16:29:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:38.313 16:29:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:38.313 16:29:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:38.313 16:29:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.313 16:29:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.313 16:29:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.313 16:29:29 json_config -- paths/export.sh@5 -- # export PATH 00:04:38.313 16:29:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.313 16:29:29 json_config -- nvmf/common.sh@51 -- # : 0 00:04:38.314 16:29:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:38.314 16:29:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:38.314 16:29:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:38.314 16:29:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:38.314 16:29:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:38.314 16:29:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:38.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:38.314 16:29:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:38.314 16:29:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:38.314 16:29:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:38.314 INFO: JSON configuration test init 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:38.314 16:29:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:38.314 16:29:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:38.314 16:29:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:38.314 16:29:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.314 16:29:29 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:38.314 16:29:29 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:38.314 16:29:29 json_config -- json_config/common.sh@10 -- # shift 00:04:38.314 16:29:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:38.314 16:29:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:38.314 16:29:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:38.314 16:29:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.314 16:29:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.314 16:29:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2477936 00:04:38.314 16:29:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:38.314 Waiting for target to run... 00:04:38.314 16:29:29 json_config -- json_config/common.sh@25 -- # waitforlisten 2477936 /var/tmp/spdk_tgt.sock 00:04:38.314 16:29:29 json_config -- common/autotest_common.sh@831 -- # '[' -z 2477936 ']' 00:04:38.314 16:29:29 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:38.314 16:29:29 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.314 16:29:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:38.314 16:29:29 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:38.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:38.314 16:29:29 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.314 16:29:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.314 [2024-10-01 16:29:29.841002] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
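json_config_test_start_app launches this target paused: with --wait-for-rpc only the RPC server on /var/tmp/spdk_tgt.sock comes up, and subsystem initialization waits for an explicit RPC. A hedged sketch of the wait-then-kick handshake; the harness's waitforlisten does roughly the polling half, and the json_config script then drives the paused target through load_config, as traced below:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk_tgt.sock
until "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1                            # poll until the RPC server accepts connections
done
"$RPC" -s "$SOCK" framework_start_init   # releases the app from --wait-for-rpc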
00:04:38.314 [2024-10-01 16:29:29.841075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477936 ] 00:04:38.575 [2024-10-01 16:29:30.155808] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.575 [2024-10-01 16:29:30.206810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.146 16:29:30 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:39.146 16:29:30 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:39.146 16:29:30 json_config -- json_config/common.sh@26 -- # echo '' 00:04:39.146 00:04:39.146 16:29:30 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:39.146 16:29:30 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:39.146 16:29:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:39.146 16:29:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.146 16:29:30 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:39.146 16:29:30 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:39.146 16:29:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:39.146 16:29:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.146 16:29:30 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:39.146 16:29:30 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:39.146 16:29:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:42.446 16:29:33 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:42.446 16:29:33 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:42.446 16:29:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:42.446 16:29:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.446 16:29:33 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:42.446 16:29:33 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:42.446 16:29:33 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:42.446 16:29:33 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:42.446 16:29:33 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:42.446 16:29:33 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:42.447 16:29:33 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:42.447 16:29:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:42.447 16:29:34 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:42.447 16:29:34 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:42.447 16:29:34 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:42.447 16:29:34 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:42.447 16:29:34 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:42.447 16:29:34 json_config -- json_config/json_config.sh@54 -- # sort 00:04:42.447 16:29:34 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:42.447 16:29:34 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:42.447 16:29:34 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:42.447 16:29:34 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:42.447 16:29:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:42.447 16:29:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.707 16:29:34 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:42.707 16:29:34 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:42.707 16:29:34 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:42.707 16:29:34 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:42.707 16:29:34 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:42.707 16:29:34 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:42.707 16:29:34 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:42.707 16:29:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:42.707 16:29:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.707 16:29:34 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:42.707 16:29:34 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:42.707 16:29:34 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:42.707 16:29:34 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:42.707 16:29:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:42.707 MallocForNvmf0 00:04:42.707 16:29:34 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:42.707 16:29:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:42.967 MallocForNvmf1 00:04:42.967 16:29:34 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:42.967 16:29:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:43.227 [2024-10-01 16:29:34.743433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.227 16:29:34 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:43.227 16:29:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:43.487 16:29:34 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:43.487 16:29:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:43.487 16:29:35 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:43.487 16:29:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:43.747 16:29:35 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:43.747 16:29:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:44.007 [2024-10-01 16:29:35.529853] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:44.007 16:29:35 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:44.007 16:29:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:44.007 16:29:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.007 16:29:35 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:44.007 16:29:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:44.007 16:29:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.007 16:29:35 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:44.007 16:29:35 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:44.007 16:29:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:44.267 MallocBdevForConfigChangeCheck 00:04:44.267 16:29:35 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:44.267 16:29:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:44.267 16:29:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.267 16:29:35 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:44.267 16:29:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.526 16:29:36 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:44.526 INFO: shutting down applications... 
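Steps @249 through @256 above build the whole NVMe-oF test configuration over RPC: two malloc bdevs, a TCP transport, a subsystem, two namespaces, and a listener on 127.0.0.1:4420. Collected into one standalone sketch (every command and argument is taken from the trace; only the rpc.py invocation is shortened):

rpc='rpc.py -s /var/tmp/spdk_tgt.sock'
# Malloc bdevs to back the namespaces (args are size in MB and block size).
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
# TCP transport, then the subsystem with both namespaces and a TCP listener.
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420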
00:04:44.526 16:29:36 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:44.526 16:29:36 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:44.526 16:29:36 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:44.526 16:29:36 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:47.822 Calling clear_iscsi_subsystem 00:04:47.822 Calling clear_nvmf_subsystem 00:04:47.822 Calling clear_nbd_subsystem 00:04:47.822 Calling clear_ublk_subsystem 00:04:47.822 Calling clear_vhost_blk_subsystem 00:04:47.822 Calling clear_vhost_scsi_subsystem 00:04:47.822 Calling clear_bdev_subsystem 00:04:47.822 16:29:38 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:47.822 16:29:38 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:47.822 16:29:38 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:47.822 16:29:38 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.822 16:29:38 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:47.822 16:29:38 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:47.822 16:29:39 json_config -- json_config/json_config.sh@352 -- # break 00:04:47.822 16:29:39 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:47.822 16:29:39 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:47.822 16:29:39 json_config -- json_config/common.sh@31 -- # local app=target 00:04:47.822 16:29:39 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:47.822 16:29:39 json_config -- json_config/common.sh@35 -- # [[ -n 2477936 ]] 00:04:47.822 16:29:39 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2477936 00:04:47.822 16:29:39 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:47.822 16:29:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.822 16:29:39 json_config -- json_config/common.sh@41 -- # kill -0 2477936 00:04:47.822 16:29:39 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.082 16:29:39 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.082 16:29:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.082 16:29:39 json_config -- json_config/common.sh@41 -- # kill -0 2477936 00:04:48.082 16:29:39 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:48.082 16:29:39 json_config -- json_config/common.sh@43 -- # break 00:04:48.082 16:29:39 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:48.082 16:29:39 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:48.082 SPDK target shutdown done 00:04:48.082 16:29:39 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:48.082 INFO: relaunching applications... 
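The shutdown sequence just traced sends SIGINT to the target and then re-checks liveness with kill -0 every half second, up to 30 tries. A compact sketch of the same wait loop (tgt_pid here is illustrative; the test indexes pids per app in the app_pid array instead):

kill -SIGINT "$tgt_pid"
for (( i = 0; i < 30; i++ )); do
    # kill -0 delivers no signal; it only tests whether the pid still exists.
    kill -0 "$tgt_pid" 2>/dev/null || break
    sleep 0.5
done
if kill -0 "$tgt_pid" 2>/dev/null; then
    echo "target still alive after 15s" >&2
else
    echo 'SPDK target shutdown done'
fi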
00:04:48.082 16:29:39 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.082 16:29:39 json_config -- json_config/common.sh@9 -- # local app=target 00:04:48.082 16:29:39 json_config -- json_config/common.sh@10 -- # shift 00:04:48.082 16:29:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:48.082 16:29:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:48.082 16:29:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:48.082 16:29:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.082 16:29:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.082 16:29:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2479850 00:04:48.082 16:29:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:48.082 Waiting for target to run... 00:04:48.082 16:29:39 json_config -- json_config/common.sh@25 -- # waitforlisten 2479850 /var/tmp/spdk_tgt.sock 00:04:48.082 16:29:39 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.082 16:29:39 json_config -- common/autotest_common.sh@831 -- # '[' -z 2479850 ']' 00:04:48.082 16:29:39 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:48.083 16:29:39 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.083 16:29:39 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:48.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:48.083 16:29:39 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.083 16:29:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.083 [2024-10-01 16:29:39.689318] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:04:48.083 [2024-10-01 16:29:39.689391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2479850 ] 00:04:48.342 [2024-10-01 16:29:40.024647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.602 [2024-10-01 16:29:40.080091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.891 [2024-10-01 16:29:43.121047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:51.891 [2024-10-01 16:29:43.153392] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:51.891 16:29:43 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.891 16:29:43 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:51.891 16:29:43 json_config -- json_config/common.sh@26 -- # echo '' 00:04:51.891 00:04:51.891 16:29:43 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:51.891 16:29:43 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:51.891 INFO: Checking if target configuration is the same... 
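The check announced here relaunches the target from spdk_tgt_config.json and then verifies that a fresh save_config matches the file. Both sides are passed through config_filter.py -method sort before diffing, so ordering differences don't count as changes. A condensed sketch of that comparison (the stdin/stdout plumbing of config_filter.py is inferred from the trace, since bash xtrace does not echo redirections):

filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
rpc.py -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > /tmp/live.json
"$filter" -method sort < spdk_tgt_config.json > /tmp/saved.json
# An empty diff means the relaunched target reproduced the saved state exactly.
diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'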
00:04:51.891 16:29:43 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.891 16:29:43 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:51.891 16:29:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:51.891 + '[' 2 -ne 2 ']' 00:04:51.891 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:51.891 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:51.891 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:51.891 +++ basename /dev/fd/62 00:04:51.891 ++ mktemp /tmp/62.XXX 00:04:51.891 + tmp_file_1=/tmp/62.GYA 00:04:51.891 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.891 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:51.891 + tmp_file_2=/tmp/spdk_tgt_config.json.PkN 00:04:51.891 + ret=0 00:04:51.891 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:51.891 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:52.150 + diff -u /tmp/62.GYA /tmp/spdk_tgt_config.json.PkN 00:04:52.150 + echo 'INFO: JSON config files are the same' 00:04:52.150 INFO: JSON config files are the same 00:04:52.150 + rm /tmp/62.GYA /tmp/spdk_tgt_config.json.PkN 00:04:52.150 + exit 0 00:04:52.150 16:29:43 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:52.150 16:29:43 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:52.150 INFO: changing configuration and checking if this can be detected... 00:04:52.150 16:29:43 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:52.150 16:29:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:52.150 16:29:43 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:52.150 16:29:43 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:52.150 16:29:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:52.150 + '[' 2 -ne 2 ']' 00:04:52.150 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:52.150 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:52.150 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:52.150 +++ basename /dev/fd/62 00:04:52.150 ++ mktemp /tmp/62.XXX 00:04:52.150 + tmp_file_1=/tmp/62.Tmz 00:04:52.150 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:52.150 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:52.150 + tmp_file_2=/tmp/spdk_tgt_config.json.As0 00:04:52.150 + ret=0 00:04:52.150 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:52.775 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:52.775 + diff -u /tmp/62.Tmz /tmp/spdk_tgt_config.json.As0 00:04:52.775 + ret=1 00:04:52.775 + echo '=== Start of file: /tmp/62.Tmz ===' 00:04:52.775 + cat /tmp/62.Tmz 00:04:52.775 + echo '=== End of file: /tmp/62.Tmz ===' 00:04:52.775 + echo '' 00:04:52.775 + echo '=== Start of file: /tmp/spdk_tgt_config.json.As0 ===' 00:04:52.775 + cat /tmp/spdk_tgt_config.json.As0 00:04:52.775 + echo '=== End of file: /tmp/spdk_tgt_config.json.As0 ===' 00:04:52.775 + echo '' 00:04:52.775 + rm /tmp/62.Tmz /tmp/spdk_tgt_config.json.As0 00:04:52.775 + exit 1 00:04:52.775 16:29:44 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:52.775 INFO: configuration change detected. 00:04:52.775 16:29:44 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:52.775 16:29:44 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:52.775 16:29:44 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.775 16:29:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.775 16:29:44 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:52.775 16:29:44 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:52.776 16:29:44 json_config -- json_config/json_config.sh@324 -- # [[ -n 2479850 ]] 00:04:52.776 16:29:44 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:52.776 16:29:44 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:52.776 16:29:44 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.776 16:29:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.776 16:29:44 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:52.776 16:29:44 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:52.776 16:29:44 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:52.776 16:29:44 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:52.776 16:29:44 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:52.776 16:29:44 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:52.776 16:29:44 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:52.776 16:29:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.776 16:29:44 json_config -- json_config/json_config.sh@330 -- # killprocess 2479850 00:04:52.776 16:29:44 json_config -- common/autotest_common.sh@950 -- # '[' -z 2479850 ']' 00:04:52.776 16:29:44 json_config -- common/autotest_common.sh@954 -- # kill -0 2479850 00:04:52.776 16:29:44 json_config -- common/autotest_common.sh@955 -- # uname 00:04:52.776 16:29:44 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.776 16:29:44 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2479850 00:04:52.776 16:29:44 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:52.776 16:29:44 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:52.776 16:29:44 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2479850' 00:04:52.776 killing process with pid 2479850 00:04:52.776 16:29:44 json_config -- common/autotest_common.sh@969 -- # kill 2479850 00:04:52.776 16:29:44 json_config -- common/autotest_common.sh@974 -- # wait 2479850 00:04:55.313 16:29:46 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.313 16:29:46 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:55.313 16:29:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:55.313 16:29:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.313 16:29:46 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:55.313 16:29:46 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:55.313 INFO: Success 00:04:55.313 00:04:55.313 real 0m17.139s 00:04:55.313 user 0m17.849s 00:04:55.313 sys 0m2.513s 00:04:55.313 16:29:46 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.313 16:29:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.313 ************************************ 00:04:55.313 END TEST json_config 00:04:55.313 ************************************ 00:04:55.313 16:29:46 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:55.313 16:29:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.313 16:29:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.313 16:29:46 -- common/autotest_common.sh@10 -- # set +x 00:04:55.313 ************************************ 00:04:55.313 START TEST json_config_extra_key 00:04:55.313 ************************************ 00:04:55.313 16:29:46 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:55.313 16:29:46 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:55.313 16:29:46 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:04:55.313 16:29:46 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:55.313 16:29:46 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.313 16:29:46 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:55.313 16:29:46 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.313 16:29:46 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:55.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.313 --rc genhtml_branch_coverage=1 00:04:55.313 --rc genhtml_function_coverage=1 00:04:55.313 --rc genhtml_legend=1 00:04:55.313 --rc geninfo_all_blocks=1 00:04:55.313 --rc geninfo_unexecuted_blocks=1 00:04:55.313 00:04:55.313 ' 00:04:55.313 16:29:46 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:55.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.313 --rc genhtml_branch_coverage=1 00:04:55.313 --rc genhtml_function_coverage=1 00:04:55.313 --rc genhtml_legend=1 00:04:55.313 --rc geninfo_all_blocks=1 00:04:55.313 --rc geninfo_unexecuted_blocks=1 00:04:55.313 00:04:55.313 ' 00:04:55.313 16:29:46 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:55.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.313 --rc genhtml_branch_coverage=1 00:04:55.313 --rc genhtml_function_coverage=1 00:04:55.313 --rc genhtml_legend=1 00:04:55.313 --rc geninfo_all_blocks=1 00:04:55.313 --rc geninfo_unexecuted_blocks=1 00:04:55.313 00:04:55.313 ' 00:04:55.313 16:29:46 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:55.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.313 --rc genhtml_branch_coverage=1 00:04:55.313 --rc genhtml_function_coverage=1 00:04:55.313 --rc genhtml_legend=1 00:04:55.313 --rc geninfo_all_blocks=1 00:04:55.313 --rc geninfo_unexecuted_blocks=1 00:04:55.313 00:04:55.313 ' 00:04:55.313 16:29:46 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:55.313 16:29:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:55.313 16:29:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.313 16:29:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.313 16:29:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.313 16:29:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.313 16:29:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.313 16:29:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.313 16:29:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.313 16:29:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.313 16:29:46 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.313 16:29:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.313 16:29:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:04:55.313 16:29:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:04:55.313 16:29:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.313 16:29:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.313 16:29:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.313 16:29:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.313 16:29:46 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.313 16:29:46 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.313 16:29:46 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.313 16:29:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.313 16:29:46 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.313 16:29:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:55.313 16:29:46 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.314 16:29:46 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:55.314 16:29:46 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:55.314 16:29:46 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:55.314 16:29:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.314 16:29:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.314 16:29:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.314 16:29:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:55.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:55.314 16:29:46 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:55.314 16:29:46 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:55.314 16:29:46 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:55.314 16:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:55.314 16:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:55.314 16:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:55.314 16:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:55.314 16:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:55.314 16:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:55.314 16:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:55.314 16:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:55.314 16:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:55.314 16:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:55.314 16:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:55.314 INFO: launching applications... 
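Note the failure recorded a few lines up: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test's -eq demands integer operands, so the empty string produces "integer expression expected" (the suite tolerates this and continues). A hedged illustration of the trap and the usual defensive spelling (the variable name below is invented, not the script's own):

unset no_huge_flag
# Empty operand: test exits with status 2 and the error seen in the log.
[ "$no_huge_flag" -eq 1 ] 2>/dev/null || echo "empty operand: -eq needs an integer"
# Defaulting the expansion keeps the comparison well-formed when unset:
if [ "${no_huge_flag:-0}" -eq 1 ]; then echo "flag set"; else echo "flag not set"; fi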
00:04:55.314 16:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:55.314 16:29:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:55.314 16:29:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:55.314 16:29:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:55.314 16:29:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:55.314 16:29:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:55.314 16:29:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.314 16:29:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.314 16:29:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2481199 00:04:55.314 16:29:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:55.314 Waiting for target to run... 00:04:55.314 16:29:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2481199 /var/tmp/spdk_tgt.sock 00:04:55.314 16:29:46 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2481199 ']' 00:04:55.314 16:29:46 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.314 16:29:46 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:55.314 16:29:46 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:55.314 16:29:46 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.314 16:29:46 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:55.314 16:29:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:55.573 [2024-10-01 16:29:47.028878] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:04:55.574 [2024-10-01 16:29:47.028927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2481199 ] 00:04:55.834 [2024-10-01 16:29:47.315634] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.834 [2024-10-01 16:29:47.367354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.405 16:29:47 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.405 16:29:47 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:56.405 16:29:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:56.405 00:04:56.405 16:29:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:56.405 INFO: shutting down applications... 
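Unlike the earlier json_config run, json_config_extra_key boots the target directly from a pre-written file, test/json_config/extra_key.json, whose contents are not reproduced in this log. As a hedged illustration of the file shape SPDK's --json loader expects (subsystems, each with a config list of method/params entries; the bdev name and sizes below are invented):

cat > /tmp/extra_key_example.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "MallocX", "num_blocks": 16384, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/extra_key_example.json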
00:04:56.405 16:29:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:56.405 16:29:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:56.405 16:29:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:56.405 16:29:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2481199 ]] 00:04:56.405 16:29:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2481199 00:04:56.405 16:29:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:56.405 16:29:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.405 16:29:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2481199 00:04:56.405 16:29:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.667 16:29:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.667 16:29:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.667 16:29:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2481199 00:04:56.667 16:29:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:56.667 16:29:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:56.667 16:29:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:56.667 16:29:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:56.667 SPDK target shutdown done 00:04:56.667 16:29:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:56.667 Success 00:04:56.667 00:04:56.667 real 0m1.578s 00:04:56.667 user 0m1.237s 00:04:56.667 sys 0m0.410s 00:04:56.667 16:29:48 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.667 16:29:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:56.667 ************************************ 00:04:56.667 END TEST json_config_extra_key 00:04:56.667 ************************************ 00:04:56.929 16:29:48 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:56.929 16:29:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.929 16:29:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.929 16:29:48 -- common/autotest_common.sh@10 -- # set +x 00:04:56.929 ************************************ 00:04:56.929 START TEST alias_rpc 00:04:56.929 ************************************ 00:04:56.929 16:29:48 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:56.929 * Looking for test storage... 
00:04:56.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:56.929 16:29:48 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:56.929 16:29:48 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:56.929 16:29:48 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:56.929 16:29:48 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.929 16:29:48 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:56.929 16:29:48 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.929 16:29:48 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:56.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.929 --rc genhtml_branch_coverage=1 00:04:56.929 --rc genhtml_function_coverage=1 00:04:56.929 --rc genhtml_legend=1 00:04:56.929 --rc geninfo_all_blocks=1 00:04:56.929 --rc geninfo_unexecuted_blocks=1 00:04:56.929 00:04:56.929 ' 00:04:56.929 16:29:48 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:56.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.929 --rc genhtml_branch_coverage=1 00:04:56.929 --rc genhtml_function_coverage=1 00:04:56.929 --rc genhtml_legend=1 00:04:56.929 --rc geninfo_all_blocks=1 00:04:56.929 --rc geninfo_unexecuted_blocks=1 00:04:56.929 00:04:56.929 ' 00:04:56.930 16:29:48 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:56.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.930 --rc genhtml_branch_coverage=1 00:04:56.930 --rc genhtml_function_coverage=1 00:04:56.930 --rc genhtml_legend=1 00:04:56.930 --rc geninfo_all_blocks=1 00:04:56.930 --rc geninfo_unexecuted_blocks=1 00:04:56.930 00:04:56.930 ' 00:04:56.930 16:29:48 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:56.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.930 --rc genhtml_branch_coverage=1 00:04:56.930 --rc genhtml_function_coverage=1 00:04:56.930 --rc genhtml_legend=1 00:04:56.930 --rc geninfo_all_blocks=1 00:04:56.930 --rc geninfo_unexecuted_blocks=1 00:04:56.930 00:04:56.930 ' 00:04:56.930 16:29:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:57.190 16:29:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2481561 00:04:57.190 16:29:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2481561 00:04:57.190 16:29:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.190 16:29:48 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2481561 ']' 00:04:57.190 16:29:48 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.190 16:29:48 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:57.190 16:29:48 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.190 16:29:48 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:57.190 16:29:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.190 [2024-10-01 16:29:48.667271] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:04:57.190 [2024-10-01 16:29:48.667321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2481561 ] 00:04:57.190 [2024-10-01 16:29:48.744094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.190 [2024-10-01 16:29:48.806633] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.131 16:29:49 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.131 16:29:49 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:58.131 16:29:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:58.131 16:29:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2481561 00:04:58.131 16:29:49 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2481561 ']' 00:04:58.131 16:29:49 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2481561 00:04:58.131 16:29:49 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:58.131 16:29:49 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:58.131 16:29:49 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2481561 00:04:58.392 16:29:49 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:58.392 16:29:49 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:58.392 16:29:49 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2481561' 00:04:58.392 killing process with pid 2481561 00:04:58.392 16:29:49 alias_rpc -- common/autotest_common.sh@969 -- # kill 2481561 00:04:58.392 16:29:49 alias_rpc -- common/autotest_common.sh@974 -- # wait 2481561 00:04:58.392 00:04:58.392 real 0m1.631s 00:04:58.392 user 0m1.891s 00:04:58.392 sys 0m0.411s 00:04:58.392 16:29:50 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.392 16:29:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.392 ************************************ 00:04:58.392 END TEST alias_rpc 00:04:58.392 ************************************ 00:04:58.652 16:29:50 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:58.652 16:29:50 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:58.652 16:29:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.652 16:29:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.652 16:29:50 -- common/autotest_common.sh@10 -- # set +x 00:04:58.652 ************************************ 00:04:58.652 START TEST spdkcli_tcp 00:04:58.652 ************************************ 00:04:58.652 16:29:50 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:58.652 * Looking for test storage... 
00:04:58.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:58.652 16:29:50 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:58.653 16:29:50 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:04:58.653 16:29:50 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:58.653 16:29:50 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.653 16:29:50 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:58.653 16:29:50 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.653 16:29:50 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:58.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.653 --rc genhtml_branch_coverage=1 00:04:58.653 --rc genhtml_function_coverage=1 00:04:58.653 --rc genhtml_legend=1 00:04:58.653 --rc geninfo_all_blocks=1 00:04:58.653 --rc geninfo_unexecuted_blocks=1 00:04:58.653 00:04:58.653 ' 00:04:58.653 16:29:50 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:58.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.653 --rc genhtml_branch_coverage=1 00:04:58.653 --rc genhtml_function_coverage=1 00:04:58.653 --rc genhtml_legend=1 00:04:58.653 --rc geninfo_all_blocks=1 00:04:58.653 --rc 
geninfo_unexecuted_blocks=1 00:04:58.653 00:04:58.653 ' 00:04:58.653 16:29:50 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:58.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.653 --rc genhtml_branch_coverage=1 00:04:58.653 --rc genhtml_function_coverage=1 00:04:58.653 --rc genhtml_legend=1 00:04:58.653 --rc geninfo_all_blocks=1 00:04:58.653 --rc geninfo_unexecuted_blocks=1 00:04:58.653 00:04:58.653 ' 00:04:58.653 16:29:50 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:58.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.653 --rc genhtml_branch_coverage=1 00:04:58.653 --rc genhtml_function_coverage=1 00:04:58.653 --rc genhtml_legend=1 00:04:58.653 --rc geninfo_all_blocks=1 00:04:58.653 --rc geninfo_unexecuted_blocks=1 00:04:58.653 00:04:58.653 ' 00:04:58.653 16:29:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:58.653 16:29:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:58.653 16:29:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:58.653 16:29:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:58.653 16:29:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:58.653 16:29:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:58.653 16:29:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:58.653 16:29:50 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.653 16:29:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.653 16:29:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2481928 00:04:58.653 16:29:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2481928 00:04:58.653 16:29:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:58.653 16:29:50 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2481928 ']' 00:04:58.653 16:29:50 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.653 16:29:50 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:58.653 16:29:50 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.653 16:29:50 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:58.653 16:29:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.913 [2024-10-01 16:29:50.391530] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:04:58.913 [2024-10-01 16:29:50.391598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2481928 ] 00:04:58.913 [2024-10-01 16:29:50.471648] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.913 [2024-10-01 16:29:50.551830] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.913 [2024-10-01 16:29:50.551836] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.852 16:29:51 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.852 16:29:51 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:59.852 16:29:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:59.852 16:29:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2481959 00:04:59.852 16:29:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:59.852 [ 00:04:59.852 "bdev_malloc_delete", 00:04:59.852 "bdev_malloc_create", 00:04:59.852 "bdev_null_resize", 00:04:59.852 "bdev_null_delete", 00:04:59.852 "bdev_null_create", 00:04:59.852 "bdev_nvme_cuse_unregister", 00:04:59.852 "bdev_nvme_cuse_register", 00:04:59.852 "bdev_opal_new_user", 00:04:59.852 "bdev_opal_set_lock_state", 00:04:59.852 "bdev_opal_delete", 00:04:59.852 "bdev_opal_get_info", 00:04:59.852 "bdev_opal_create", 00:04:59.852 "bdev_nvme_opal_revert", 00:04:59.852 "bdev_nvme_opal_init", 00:04:59.852 "bdev_nvme_send_cmd", 00:04:59.852 "bdev_nvme_set_keys", 00:04:59.852 "bdev_nvme_get_path_iostat", 00:04:59.852 "bdev_nvme_get_mdns_discovery_info", 00:04:59.852 "bdev_nvme_stop_mdns_discovery", 00:04:59.852 "bdev_nvme_start_mdns_discovery", 00:04:59.852 "bdev_nvme_set_multipath_policy", 00:04:59.852 "bdev_nvme_set_preferred_path", 00:04:59.852 "bdev_nvme_get_io_paths", 00:04:59.852 "bdev_nvme_remove_error_injection", 00:04:59.852 "bdev_nvme_add_error_injection", 00:04:59.852 "bdev_nvme_get_discovery_info", 00:04:59.852 "bdev_nvme_stop_discovery", 00:04:59.852 "bdev_nvme_start_discovery", 00:04:59.852 "bdev_nvme_get_controller_health_info", 00:04:59.852 "bdev_nvme_disable_controller", 00:04:59.852 "bdev_nvme_enable_controller", 00:04:59.852 "bdev_nvme_reset_controller", 00:04:59.852 "bdev_nvme_get_transport_statistics", 00:04:59.852 "bdev_nvme_apply_firmware", 00:04:59.852 "bdev_nvme_detach_controller", 00:04:59.852 "bdev_nvme_get_controllers", 00:04:59.852 "bdev_nvme_attach_controller", 00:04:59.852 "bdev_nvme_set_hotplug", 00:04:59.852 "bdev_nvme_set_options", 00:04:59.852 "bdev_passthru_delete", 00:04:59.852 "bdev_passthru_create", 00:04:59.852 "bdev_lvol_set_parent_bdev", 00:04:59.852 "bdev_lvol_set_parent", 00:04:59.852 "bdev_lvol_check_shallow_copy", 00:04:59.852 "bdev_lvol_start_shallow_copy", 00:04:59.852 "bdev_lvol_grow_lvstore", 00:04:59.852 "bdev_lvol_get_lvols", 00:04:59.852 "bdev_lvol_get_lvstores", 00:04:59.852 "bdev_lvol_delete", 00:04:59.852 "bdev_lvol_set_read_only", 00:04:59.852 "bdev_lvol_resize", 00:04:59.852 "bdev_lvol_decouple_parent", 00:04:59.852 "bdev_lvol_inflate", 00:04:59.852 "bdev_lvol_rename", 00:04:59.852 "bdev_lvol_clone_bdev", 00:04:59.852 "bdev_lvol_clone", 00:04:59.852 "bdev_lvol_snapshot", 00:04:59.852 "bdev_lvol_create", 00:04:59.852 "bdev_lvol_delete_lvstore", 00:04:59.852 "bdev_lvol_rename_lvstore", 
00:04:59.852 "bdev_lvol_create_lvstore", 00:04:59.852 "bdev_raid_set_options", 00:04:59.852 "bdev_raid_remove_base_bdev", 00:04:59.852 "bdev_raid_add_base_bdev", 00:04:59.852 "bdev_raid_delete", 00:04:59.852 "bdev_raid_create", 00:04:59.852 "bdev_raid_get_bdevs", 00:04:59.852 "bdev_error_inject_error", 00:04:59.852 "bdev_error_delete", 00:04:59.852 "bdev_error_create", 00:04:59.852 "bdev_split_delete", 00:04:59.852 "bdev_split_create", 00:04:59.852 "bdev_delay_delete", 00:04:59.852 "bdev_delay_create", 00:04:59.852 "bdev_delay_update_latency", 00:04:59.852 "bdev_zone_block_delete", 00:04:59.852 "bdev_zone_block_create", 00:04:59.852 "blobfs_create", 00:04:59.852 "blobfs_detect", 00:04:59.852 "blobfs_set_cache_size", 00:04:59.852 "bdev_aio_delete", 00:04:59.852 "bdev_aio_rescan", 00:04:59.852 "bdev_aio_create", 00:04:59.852 "bdev_ftl_set_property", 00:04:59.852 "bdev_ftl_get_properties", 00:04:59.852 "bdev_ftl_get_stats", 00:04:59.852 "bdev_ftl_unmap", 00:04:59.852 "bdev_ftl_unload", 00:04:59.852 "bdev_ftl_delete", 00:04:59.852 "bdev_ftl_load", 00:04:59.852 "bdev_ftl_create", 00:04:59.852 "bdev_virtio_attach_controller", 00:04:59.852 "bdev_virtio_scsi_get_devices", 00:04:59.852 "bdev_virtio_detach_controller", 00:04:59.852 "bdev_virtio_blk_set_hotplug", 00:04:59.852 "bdev_iscsi_delete", 00:04:59.852 "bdev_iscsi_create", 00:04:59.852 "bdev_iscsi_set_options", 00:04:59.852 "accel_error_inject_error", 00:04:59.852 "ioat_scan_accel_module", 00:04:59.852 "dsa_scan_accel_module", 00:04:59.852 "iaa_scan_accel_module", 00:04:59.852 "vfu_virtio_create_fs_endpoint", 00:04:59.852 "vfu_virtio_create_scsi_endpoint", 00:04:59.852 "vfu_virtio_scsi_remove_target", 00:04:59.852 "vfu_virtio_scsi_add_target", 00:04:59.852 "vfu_virtio_create_blk_endpoint", 00:04:59.852 "vfu_virtio_delete_endpoint", 00:04:59.852 "keyring_file_remove_key", 00:04:59.852 "keyring_file_add_key", 00:04:59.852 "keyring_linux_set_options", 00:04:59.852 "fsdev_aio_delete", 00:04:59.852 "fsdev_aio_create", 00:04:59.852 "iscsi_get_histogram", 00:04:59.852 "iscsi_enable_histogram", 00:04:59.852 "iscsi_set_options", 00:04:59.852 "iscsi_get_auth_groups", 00:04:59.852 "iscsi_auth_group_remove_secret", 00:04:59.852 "iscsi_auth_group_add_secret", 00:04:59.852 "iscsi_delete_auth_group", 00:04:59.852 "iscsi_create_auth_group", 00:04:59.852 "iscsi_set_discovery_auth", 00:04:59.852 "iscsi_get_options", 00:04:59.852 "iscsi_target_node_request_logout", 00:04:59.852 "iscsi_target_node_set_redirect", 00:04:59.852 "iscsi_target_node_set_auth", 00:04:59.852 "iscsi_target_node_add_lun", 00:04:59.852 "iscsi_get_stats", 00:04:59.852 "iscsi_get_connections", 00:04:59.852 "iscsi_portal_group_set_auth", 00:04:59.852 "iscsi_start_portal_group", 00:04:59.852 "iscsi_delete_portal_group", 00:04:59.852 "iscsi_create_portal_group", 00:04:59.852 "iscsi_get_portal_groups", 00:04:59.852 "iscsi_delete_target_node", 00:04:59.852 "iscsi_target_node_remove_pg_ig_maps", 00:04:59.852 "iscsi_target_node_add_pg_ig_maps", 00:04:59.853 "iscsi_create_target_node", 00:04:59.853 "iscsi_get_target_nodes", 00:04:59.853 "iscsi_delete_initiator_group", 00:04:59.853 "iscsi_initiator_group_remove_initiators", 00:04:59.853 "iscsi_initiator_group_add_initiators", 00:04:59.853 "iscsi_create_initiator_group", 00:04:59.853 "iscsi_get_initiator_groups", 00:04:59.853 "nvmf_set_crdt", 00:04:59.853 "nvmf_set_config", 00:04:59.853 "nvmf_set_max_subsystems", 00:04:59.853 "nvmf_stop_mdns_prr", 00:04:59.853 "nvmf_publish_mdns_prr", 00:04:59.853 "nvmf_subsystem_get_listeners", 00:04:59.853 
"nvmf_subsystem_get_qpairs", 00:04:59.853 "nvmf_subsystem_get_controllers", 00:04:59.853 "nvmf_get_stats", 00:04:59.853 "nvmf_get_transports", 00:04:59.853 "nvmf_create_transport", 00:04:59.853 "nvmf_get_targets", 00:04:59.853 "nvmf_delete_target", 00:04:59.853 "nvmf_create_target", 00:04:59.853 "nvmf_subsystem_allow_any_host", 00:04:59.853 "nvmf_subsystem_set_keys", 00:04:59.853 "nvmf_subsystem_remove_host", 00:04:59.853 "nvmf_subsystem_add_host", 00:04:59.853 "nvmf_ns_remove_host", 00:04:59.853 "nvmf_ns_add_host", 00:04:59.853 "nvmf_subsystem_remove_ns", 00:04:59.853 "nvmf_subsystem_set_ns_ana_group", 00:04:59.853 "nvmf_subsystem_add_ns", 00:04:59.853 "nvmf_subsystem_listener_set_ana_state", 00:04:59.853 "nvmf_discovery_get_referrals", 00:04:59.853 "nvmf_discovery_remove_referral", 00:04:59.853 "nvmf_discovery_add_referral", 00:04:59.853 "nvmf_subsystem_remove_listener", 00:04:59.853 "nvmf_subsystem_add_listener", 00:04:59.853 "nvmf_delete_subsystem", 00:04:59.853 "nvmf_create_subsystem", 00:04:59.853 "nvmf_get_subsystems", 00:04:59.853 "env_dpdk_get_mem_stats", 00:04:59.853 "nbd_get_disks", 00:04:59.853 "nbd_stop_disk", 00:04:59.853 "nbd_start_disk", 00:04:59.853 "ublk_recover_disk", 00:04:59.853 "ublk_get_disks", 00:04:59.853 "ublk_stop_disk", 00:04:59.853 "ublk_start_disk", 00:04:59.853 "ublk_destroy_target", 00:04:59.853 "ublk_create_target", 00:04:59.853 "virtio_blk_create_transport", 00:04:59.853 "virtio_blk_get_transports", 00:04:59.853 "vhost_controller_set_coalescing", 00:04:59.853 "vhost_get_controllers", 00:04:59.853 "vhost_delete_controller", 00:04:59.853 "vhost_create_blk_controller", 00:04:59.853 "vhost_scsi_controller_remove_target", 00:04:59.853 "vhost_scsi_controller_add_target", 00:04:59.853 "vhost_start_scsi_controller", 00:04:59.853 "vhost_create_scsi_controller", 00:04:59.853 "thread_set_cpumask", 00:04:59.853 "scheduler_set_options", 00:04:59.853 "framework_get_governor", 00:04:59.853 "framework_get_scheduler", 00:04:59.853 "framework_set_scheduler", 00:04:59.853 "framework_get_reactors", 00:04:59.853 "thread_get_io_channels", 00:04:59.853 "thread_get_pollers", 00:04:59.853 "thread_get_stats", 00:04:59.853 "framework_monitor_context_switch", 00:04:59.853 "spdk_kill_instance", 00:04:59.853 "log_enable_timestamps", 00:04:59.853 "log_get_flags", 00:04:59.853 "log_clear_flag", 00:04:59.853 "log_set_flag", 00:04:59.853 "log_get_level", 00:04:59.853 "log_set_level", 00:04:59.853 "log_get_print_level", 00:04:59.853 "log_set_print_level", 00:04:59.853 "framework_enable_cpumask_locks", 00:04:59.853 "framework_disable_cpumask_locks", 00:04:59.853 "framework_wait_init", 00:04:59.853 "framework_start_init", 00:04:59.853 "scsi_get_devices", 00:04:59.853 "bdev_get_histogram", 00:04:59.853 "bdev_enable_histogram", 00:04:59.853 "bdev_set_qos_limit", 00:04:59.853 "bdev_set_qd_sampling_period", 00:04:59.853 "bdev_get_bdevs", 00:04:59.853 "bdev_reset_iostat", 00:04:59.853 "bdev_get_iostat", 00:04:59.853 "bdev_examine", 00:04:59.853 "bdev_wait_for_examine", 00:04:59.853 "bdev_set_options", 00:04:59.853 "accel_get_stats", 00:04:59.853 "accel_set_options", 00:04:59.853 "accel_set_driver", 00:04:59.853 "accel_crypto_key_destroy", 00:04:59.853 "accel_crypto_keys_get", 00:04:59.853 "accel_crypto_key_create", 00:04:59.853 "accel_assign_opc", 00:04:59.853 "accel_get_module_info", 00:04:59.853 "accel_get_opc_assignments", 00:04:59.853 "vmd_rescan", 00:04:59.853 "vmd_remove_device", 00:04:59.853 "vmd_enable", 00:04:59.853 "sock_get_default_impl", 00:04:59.853 "sock_set_default_impl", 
00:04:59.853 "sock_impl_set_options", 00:04:59.853 "sock_impl_get_options", 00:04:59.853 "iobuf_get_stats", 00:04:59.853 "iobuf_set_options", 00:04:59.853 "keyring_get_keys", 00:04:59.853 "vfu_tgt_set_base_path", 00:04:59.853 "framework_get_pci_devices", 00:04:59.853 "framework_get_config", 00:04:59.853 "framework_get_subsystems", 00:04:59.853 "fsdev_set_opts", 00:04:59.853 "fsdev_get_opts", 00:04:59.853 "trace_get_info", 00:04:59.853 "trace_get_tpoint_group_mask", 00:04:59.853 "trace_disable_tpoint_group", 00:04:59.853 "trace_enable_tpoint_group", 00:04:59.853 "trace_clear_tpoint_mask", 00:04:59.853 "trace_set_tpoint_mask", 00:04:59.853 "notify_get_notifications", 00:04:59.853 "notify_get_types", 00:04:59.853 "spdk_get_version", 00:04:59.853 "rpc_get_methods" 00:04:59.853 ] 00:04:59.853 16:29:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:59.853 16:29:51 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:59.853 16:29:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.853 16:29:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:59.853 16:29:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2481928 00:04:59.853 16:29:51 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2481928 ']' 00:04:59.853 16:29:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2481928 00:04:59.853 16:29:51 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:59.853 16:29:51 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:59.853 16:29:51 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2481928 00:05:00.114 16:29:51 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:00.114 16:29:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:00.114 16:29:51 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2481928' 00:05:00.114 killing process with pid 2481928 00:05:00.114 16:29:51 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2481928 00:05:00.114 16:29:51 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2481928 00:05:00.114 00:05:00.114 real 0m1.629s 00:05:00.114 user 0m2.985s 00:05:00.114 sys 0m0.475s 00:05:00.114 16:29:51 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.114 16:29:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.114 ************************************ 00:05:00.114 END TEST spdkcli_tcp 00:05:00.114 ************************************ 00:05:00.114 16:29:51 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:00.375 16:29:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.375 16:29:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.375 16:29:51 -- common/autotest_common.sh@10 -- # set +x 00:05:00.375 ************************************ 00:05:00.375 START TEST dpdk_mem_utility 00:05:00.375 ************************************ 00:05:00.375 16:29:51 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:00.375 * Looking for test storage... 
00:05:00.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:00.375 16:29:51 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:00.375 16:29:51 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:00.375 16:29:51 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:00.375 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.375 16:29:52 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:00.375 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.375 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:00.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.375 --rc genhtml_branch_coverage=1 00:05:00.375 --rc genhtml_function_coverage=1 00:05:00.375 --rc genhtml_legend=1 00:05:00.375 --rc geninfo_all_blocks=1 00:05:00.375 --rc geninfo_unexecuted_blocks=1 00:05:00.375 00:05:00.375 ' 00:05:00.375 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:00.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.375 --rc 
genhtml_branch_coverage=1 00:05:00.375 --rc genhtml_function_coverage=1 00:05:00.375 --rc genhtml_legend=1 00:05:00.375 --rc geninfo_all_blocks=1 00:05:00.375 --rc geninfo_unexecuted_blocks=1 00:05:00.375 00:05:00.375 ' 00:05:00.375 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:00.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.375 --rc genhtml_branch_coverage=1 00:05:00.375 --rc genhtml_function_coverage=1 00:05:00.375 --rc genhtml_legend=1 00:05:00.375 --rc geninfo_all_blocks=1 00:05:00.375 --rc geninfo_unexecuted_blocks=1 00:05:00.375 00:05:00.375 ' 00:05:00.375 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:00.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.375 --rc genhtml_branch_coverage=1 00:05:00.375 --rc genhtml_function_coverage=1 00:05:00.375 --rc genhtml_legend=1 00:05:00.375 --rc geninfo_all_blocks=1 00:05:00.375 --rc geninfo_unexecuted_blocks=1 00:05:00.375 00:05:00.375 ' 00:05:00.375 16:29:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:00.375 16:29:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2482310 00:05:00.375 16:29:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2482310 00:05:00.375 16:29:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.375 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2482310 ']' 00:05:00.375 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.375 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.375 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.376 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.376 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:00.636 [2024-10-01 16:29:52.096265] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
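The dpdk_mem_utility test below asks the freshly started target for a DPDK memory dump and then post-processes it. A minimal sketch of the same flow, assuming a running spdk_tgt and the stock scripts (output filename as reported by the RPC in the trace below):

  # Dump DPDK heap/mempool state; the RPC reports the file it wrote,
  # /tmp/spdk_mem_dump.txt by default
  ./scripts/rpc.py env_dpdk_get_mem_stats

  # Summarize the dump; with -m <heap id> the script prints the detailed
  # free/malloc/memzone element lists for that heap (heap 0 in this run)
  ./scripts/dpdk_mem_info.py
  ./scripts/dpdk_mem_info.py -m 0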
00:05:00.636 [2024-10-01 16:29:52.096337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2482310 ] 00:05:00.636 [2024-10-01 16:29:52.174439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.637 [2024-10-01 16:29:52.238432] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.578 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.578 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:01.578 16:29:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:01.578 16:29:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:01.578 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.578 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.578 { 00:05:01.578 "filename": "/tmp/spdk_mem_dump.txt" 00:05:01.578 } 00:05:01.578 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.578 16:29:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:01.578 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:01.578 1 heaps totaling size 860.000000 MiB 00:05:01.578 size: 860.000000 MiB heap id: 0 00:05:01.578 end heaps---------- 00:05:01.578 9 mempools totaling size 642.649841 MiB 00:05:01.578 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:01.578 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:01.578 size: 92.545471 MiB name: bdev_io_2482310 00:05:01.578 size: 51.011292 MiB name: evtpool_2482310 00:05:01.578 size: 50.003479 MiB name: msgpool_2482310 00:05:01.578 size: 36.509338 MiB name: fsdev_io_2482310 00:05:01.578 size: 21.763794 MiB name: PDU_Pool 00:05:01.578 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:01.578 size: 0.026123 MiB name: Session_Pool 00:05:01.578 end mempools------- 00:05:01.578 6 memzones totaling size 4.142822 MiB 00:05:01.578 size: 1.000366 MiB name: RG_ring_0_2482310 00:05:01.578 size: 1.000366 MiB name: RG_ring_1_2482310 00:05:01.578 size: 1.000366 MiB name: RG_ring_4_2482310 00:05:01.578 size: 1.000366 MiB name: RG_ring_5_2482310 00:05:01.578 size: 0.125366 MiB name: RG_ring_2_2482310 00:05:01.578 size: 0.015991 MiB name: RG_ring_3_2482310 00:05:01.578 end memzones------- 00:05:01.578 16:29:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:01.578 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:05:01.578 list of free elements. 
size: 13.984680 MiB 00:05:01.578 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:01.578 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:01.578 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:01.578 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:01.578 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:01.578 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:01.578 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:01.578 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:01.578 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:01.578 element at address: 0x20001d800000 with size: 0.582886 MiB 00:05:01.578 element at address: 0x200003e00000 with size: 0.495605 MiB 00:05:01.579 element at address: 0x20000d800000 with size: 0.490723 MiB 00:05:01.579 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:01.579 element at address: 0x200007000000 with size: 0.481934 MiB 00:05:01.579 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:05:01.579 element at address: 0x200003a00000 with size: 0.354858 MiB 00:05:01.579 list of standard malloc elements. size: 199.218628 MiB 00:05:01.579 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:01.579 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:01.579 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:01.579 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:01.579 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:01.579 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:01.579 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:01.579 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:01.579 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:01.579 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:01.579 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:01.579 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:01.579 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:01.579 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:01.579 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:01.579 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:01.579 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:05:01.579 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:01.579 element at address: 0x200003a5f240 with size: 0.000183 MiB 00:05:01.579 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:01.579 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:01.579 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:01.579 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:01.579 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:01.579 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:01.579 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:01.579 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:01.579 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:01.579 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:01.579 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:01.579 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:01.579 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:05:01.579 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:01.579 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:01.579 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:01.579 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:01.579 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:01.579 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:01.579 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:01.579 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:05:01.579 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:05:01.579 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:05:01.579 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:01.579 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:01.579 list of memzone associated elements. size: 646.796692 MiB 00:05:01.579 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:01.579 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:01.579 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:01.579 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:01.579 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:01.579 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2482310_0 00:05:01.579 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:01.579 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2482310_0 00:05:01.579 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:01.579 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2482310_0 00:05:01.579 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:01.579 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2482310_0 00:05:01.579 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:01.579 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:01.579 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:01.579 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:01.579 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:01.579 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2482310 00:05:01.579 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:01.579 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2482310 00:05:01.579 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:01.579 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2482310 00:05:01.579 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:01.579 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:01.579 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:01.579 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:01.579 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:01.579 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:01.579 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:01.579 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:01.579 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:01.579 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2482310 00:05:01.579 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:01.579 associated memzone info: 
size: 1.000366 MiB name: RG_ring_1_2482310 00:05:01.579 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:01.579 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2482310 00:05:01.579 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:01.579 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2482310 00:05:01.579 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:01.579 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2482310 00:05:01.579 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:01.579 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2482310 00:05:01.579 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:01.579 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:01.579 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:01.579 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:01.579 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:01.579 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:01.579 element at address: 0x200003a5f300 with size: 0.125488 MiB 00:05:01.579 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2482310 00:05:01.579 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:01.579 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:01.579 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:05:01.579 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:01.579 element at address: 0x200003a5b040 with size: 0.016113 MiB 00:05:01.579 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2482310 00:05:01.579 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:05:01.579 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:01.579 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:01.579 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2482310 00:05:01.579 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:01.579 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2482310 00:05:01.579 element at address: 0x200003a5ae40 with size: 0.000305 MiB 00:05:01.579 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2482310 00:05:01.579 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:05:01.579 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:01.579 16:29:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:01.579 16:29:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2482310 00:05:01.579 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2482310 ']' 00:05:01.579 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2482310 00:05:01.579 16:29:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:01.579 16:29:53 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:01.579 16:29:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2482310 00:05:01.579 16:29:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:01.579 16:29:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:01.579 16:29:53 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2482310' 
00:05:01.579 killing process with pid 2482310 00:05:01.579 16:29:53 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2482310 00:05:01.579 16:29:53 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2482310 00:05:01.839 00:05:01.839 real 0m1.446s 00:05:01.839 user 0m1.536s 00:05:01.839 sys 0m0.415s 00:05:01.839 16:29:53 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.839 16:29:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.839 ************************************ 00:05:01.839 END TEST dpdk_mem_utility 00:05:01.839 ************************************ 00:05:01.840 16:29:53 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:01.840 16:29:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.840 16:29:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.840 16:29:53 -- common/autotest_common.sh@10 -- # set +x 00:05:01.840 ************************************ 00:05:01.840 START TEST event 00:05:01.840 ************************************ 00:05:01.840 16:29:53 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:01.840 * Looking for test storage... 00:05:01.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:01.840 16:29:53 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:01.840 16:29:53 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:01.840 16:29:53 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:02.100 16:29:53 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:02.100 16:29:53 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.100 16:29:53 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.100 16:29:53 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.100 16:29:53 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.100 16:29:53 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.100 16:29:53 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.100 16:29:53 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.100 16:29:53 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.100 16:29:53 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.100 16:29:53 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.100 16:29:53 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.100 16:29:53 event -- scripts/common.sh@344 -- # case "$op" in 00:05:02.100 16:29:53 event -- scripts/common.sh@345 -- # : 1 00:05:02.100 16:29:53 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.100 16:29:53 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.100 16:29:53 event -- scripts/common.sh@365 -- # decimal 1 00:05:02.100 16:29:53 event -- scripts/common.sh@353 -- # local d=1 00:05:02.100 16:29:53 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.100 16:29:53 event -- scripts/common.sh@355 -- # echo 1 00:05:02.100 16:29:53 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.100 16:29:53 event -- scripts/common.sh@366 -- # decimal 2 00:05:02.100 16:29:53 event -- scripts/common.sh@353 -- # local d=2 00:05:02.100 16:29:53 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.100 16:29:53 event -- scripts/common.sh@355 -- # echo 2 00:05:02.100 16:29:53 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.100 16:29:53 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.100 16:29:53 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.100 16:29:53 event -- scripts/common.sh@368 -- # return 0 00:05:02.100 16:29:53 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.100 16:29:53 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:02.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.100 --rc genhtml_branch_coverage=1 00:05:02.100 --rc genhtml_function_coverage=1 00:05:02.100 --rc genhtml_legend=1 00:05:02.100 --rc geninfo_all_blocks=1 00:05:02.100 --rc geninfo_unexecuted_blocks=1 00:05:02.100 00:05:02.100 ' 00:05:02.100 16:29:53 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:02.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.100 --rc genhtml_branch_coverage=1 00:05:02.100 --rc genhtml_function_coverage=1 00:05:02.100 --rc genhtml_legend=1 00:05:02.100 --rc geninfo_all_blocks=1 00:05:02.100 --rc geninfo_unexecuted_blocks=1 00:05:02.100 00:05:02.100 ' 00:05:02.100 16:29:53 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:02.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.100 --rc genhtml_branch_coverage=1 00:05:02.100 --rc genhtml_function_coverage=1 00:05:02.100 --rc genhtml_legend=1 00:05:02.100 --rc geninfo_all_blocks=1 00:05:02.100 --rc geninfo_unexecuted_blocks=1 00:05:02.100 00:05:02.100 ' 00:05:02.100 16:29:53 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:02.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.100 --rc genhtml_branch_coverage=1 00:05:02.100 --rc genhtml_function_coverage=1 00:05:02.100 --rc genhtml_legend=1 00:05:02.100 --rc geninfo_all_blocks=1 00:05:02.100 --rc geninfo_unexecuted_blocks=1 00:05:02.100 00:05:02.100 ' 00:05:02.100 16:29:53 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:02.100 16:29:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:02.100 16:29:53 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:02.100 16:29:53 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:02.100 16:29:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.100 16:29:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.100 ************************************ 00:05:02.100 START TEST event_perf 00:05:02.100 ************************************ 00:05:02.100 16:29:53 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:02.100 Running I/O for 1 seconds...[2024-10-01 16:29:53.608670] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:02.100 [2024-10-01 16:29:53.608782] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2482680 ] 00:05:02.100 [2024-10-01 16:29:53.703015] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.361 [2024-10-01 16:29:53.783362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.361 [2024-10-01 16:29:53.783477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.361 [2024-10-01 16:29:53.783597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.361 [2024-10-01 16:29:53.783601] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.302 Running I/O for 1 seconds... 00:05:03.302 lcore 0: 214609 00:05:03.302 lcore 1: 214605 00:05:03.302 lcore 2: 214605 00:05:03.302 lcore 3: 214607 00:05:03.302 done. 00:05:03.302 00:05:03.302 real 0m1.248s 00:05:03.302 user 0m4.147s 00:05:03.302 sys 0m0.099s 00:05:03.302 16:29:54 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.302 16:29:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.302 ************************************ 00:05:03.302 END TEST event_perf 00:05:03.302 ************************************ 00:05:03.302 16:29:54 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:03.302 16:29:54 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:03.302 16:29:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.302 16:29:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.302 ************************************ 00:05:03.302 START TEST event_reactor 00:05:03.302 ************************************ 00:05:03.302 16:29:54 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:03.302 [2024-10-01 16:29:54.934834] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
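The event_perf numbers above (one second, four reactors, roughly 214k events per lcore) come from the standalone microbenchmark; a sketch of reproducing the run from a built tree, with the same flags the test passes:

  # Core mask 0xF = 4 reactors; -t 1 runs the event loop for one second
  # and prints the events handled per lcore
  ./test/event/event_perf/event_perf -m 0xF -t 1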
00:05:03.302 [2024-10-01 16:29:54.934939] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2482769 ] 00:05:03.563 [2024-10-01 16:29:55.015142] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.563 [2024-10-01 16:29:55.092489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.502 test_start 00:05:04.502 oneshot 00:05:04.502 tick 100 00:05:04.502 tick 100 00:05:04.502 tick 250 00:05:04.502 tick 100 00:05:04.502 tick 100 00:05:04.502 tick 100 00:05:04.502 tick 250 00:05:04.502 tick 500 00:05:04.502 tick 100 00:05:04.502 tick 100 00:05:04.502 tick 250 00:05:04.502 tick 100 00:05:04.502 tick 100 00:05:04.502 test_end 00:05:04.502 00:05:04.502 real 0m1.229s 00:05:04.502 user 0m1.139s 00:05:04.502 sys 0m0.085s 00:05:04.502 16:29:56 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.502 16:29:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:04.502 ************************************ 00:05:04.502 END TEST event_reactor 00:05:04.502 ************************************ 00:05:04.502 16:29:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:04.502 16:29:56 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:04.502 16:29:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.502 16:29:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.762 ************************************ 00:05:04.762 START TEST event_reactor_perf 00:05:04.762 ************************************ 00:05:04.762 16:29:56 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:04.762 [2024-10-01 16:29:56.239563] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:05:04.762 [2024-10-01 16:29:56.239660] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2483040 ] 00:05:04.762 [2024-10-01 16:29:56.322401] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.762 [2024-10-01 16:29:56.399928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.146 test_start 00:05:06.146 test_end 00:05:06.146 Performance: 400068 events per second 00:05:06.146 00:05:06.146 real 0m1.233s 00:05:06.146 user 0m1.133s 00:05:06.146 sys 0m0.095s 00:05:06.146 16:29:57 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.146 16:29:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.146 ************************************ 00:05:06.146 END TEST event_reactor_perf 00:05:06.146 ************************************ 00:05:06.146 16:29:57 event -- event/event.sh@49 -- # uname -s 00:05:06.146 16:29:57 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:06.146 16:29:57 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:06.146 16:29:57 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.146 16:29:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.146 16:29:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.146 ************************************ 00:05:06.146 START TEST event_scheduler 00:05:06.146 ************************************ 00:05:06.146 16:29:57 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:06.146 * Looking for test storage... 
00:05:06.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:06.146 16:29:57 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:06.146 16:29:57 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:06.146 16:29:57 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:06.146 16:29:57 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.146 16:29:57 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:06.147 16:29:57 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.147 16:29:57 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:06.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.147 --rc genhtml_branch_coverage=1 00:05:06.147 --rc genhtml_function_coverage=1 00:05:06.147 --rc genhtml_legend=1 00:05:06.147 --rc geninfo_all_blocks=1 00:05:06.147 --rc geninfo_unexecuted_blocks=1 00:05:06.147 00:05:06.147 ' 00:05:06.147 16:29:57 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:06.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.147 --rc genhtml_branch_coverage=1 00:05:06.147 --rc genhtml_function_coverage=1 00:05:06.147 --rc genhtml_legend=1 00:05:06.147 --rc geninfo_all_blocks=1 00:05:06.147 --rc geninfo_unexecuted_blocks=1 00:05:06.147 00:05:06.147 ' 00:05:06.147 16:29:57 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:06.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.147 --rc genhtml_branch_coverage=1 00:05:06.147 --rc genhtml_function_coverage=1 00:05:06.147 --rc genhtml_legend=1 00:05:06.147 --rc geninfo_all_blocks=1 00:05:06.147 --rc geninfo_unexecuted_blocks=1 00:05:06.147 00:05:06.147 ' 00:05:06.147 16:29:57 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:06.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.147 --rc genhtml_branch_coverage=1 00:05:06.147 --rc genhtml_function_coverage=1 00:05:06.147 --rc genhtml_legend=1 00:05:06.147 --rc geninfo_all_blocks=1 00:05:06.147 --rc geninfo_unexecuted_blocks=1 00:05:06.147 00:05:06.147 ' 00:05:06.147 16:29:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:06.147 16:29:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2483400 00:05:06.147 16:29:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.147 16:29:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2483400 00:05:06.147 16:29:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:05:06.147 16:29:57 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2483400 ']' 00:05:06.147 16:29:57 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.147 16:29:57 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.147 16:29:57 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.147 16:29:57 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.147 16:29:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.147 [2024-10-01 16:29:57.783337] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:06.147 [2024-10-01 16:29:57.783401] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2483400 ] 00:05:06.408 [2024-10-01 16:29:57.839256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.408 [2024-10-01 16:29:57.906057] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.408 [2024-10-01 16:29:57.906178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.408 [2024-10-01 16:29:57.906297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.408 [2024-10-01 16:29:57.906300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.408 16:29:57 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.408 16:29:57 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:06.408 16:29:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:06.408 16:29:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.408 16:29:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.408 [2024-10-01 16:29:57.950764] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:06.408 [2024-10-01 16:29:57.950778] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:06.408 [2024-10-01 16:29:57.950785] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:06.408 [2024-10-01 16:29:57.950790] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:06.408 [2024-10-01 16:29:57.950794] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:06.408 16:29:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.408 16:29:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:06.408 16:29:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.408 16:29:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.408 [2024-10-01 16:29:58.005489] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
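The sequence just traced (scheduler selected while the app waits for RPC, then init completed) can be driven by hand against any target started with --wait-for-rpc; a minimal sketch with the stock rpc.py:

  # Select the dynamic scheduler before subsystem init; the dpdk governor
  # notices above are expected when the core mask holds only part of an
  # SMT sibling set
  ./scripts/rpc.py framework_set_scheduler dynamic

  # Finish initialization, then confirm which scheduler is active
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py framework_get_scheduler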
00:05:06.408 16:29:58 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.408 16:29:58 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:06.408 16:29:58 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.408 16:29:58 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.408 16:29:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.408 ************************************ 00:05:06.408 START TEST scheduler_create_thread 00:05:06.408 ************************************ 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.408 2 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.408 3 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.408 4 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.408 5 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.408 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.408 6 00:05:06.670 16:29:58 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.670 16:29:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:06.670 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.670 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.670 7 00:05:06.670 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.670 16:29:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:06.670 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.670 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.670 8 00:05:06.670 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.670 16:29:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:06.670 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.670 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.670 9 00:05:06.670 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.670 16:29:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:06.670 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.671 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.671 10 00:05:06.671 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.671 16:29:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:06.671 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.671 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.671 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.671 16:29:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:06.671 16:29:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:06.671 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.671 16:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.613 16:29:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.613 16:29:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:07.613 16:29:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.613 16:29:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.996 16:30:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.996 16:30:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:08.996 16:30:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:08.996 16:30:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.996 16:30:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.937 16:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.937 00:05:09.937 real 0m3.380s 00:05:09.937 user 0m0.027s 00:05:09.937 sys 0m0.005s 00:05:09.937 16:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.937 16:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.937 ************************************ 00:05:09.937 END TEST scheduler_create_thread 00:05:09.937 ************************************ 00:05:09.937 16:30:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:09.937 16:30:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2483400 00:05:09.937 16:30:01 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2483400 ']' 00:05:09.937 16:30:01 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2483400 00:05:09.937 16:30:01 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:09.937 16:30:01 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.937 16:30:01 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2483400 00:05:09.937 16:30:01 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:09.937 16:30:01 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:09.937 16:30:01 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2483400' 00:05:09.937 killing process with pid 2483400 00:05:09.937 16:30:01 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2483400 00:05:09.937 16:30:01 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2483400 00:05:10.196 [2024-10-01 16:30:01.804834] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
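For reference, the thread-lifecycle RPC sequence exercised by scheduler_create_thread above boils down to the following shell sketch. Every flag is copied from the xtrace; rpc_cmd is the harness wrapper around scripts/rpc.py, and capturing the returned thread id into a variable is an illustrative reconstruction (the xtrace only shows the resulting assignments thread_id=11 and thread_id=12):

    # create pinned threads at both extremes of activity, one per core mask
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100  # 100%-active thread pinned to core 0
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0      # idle thread pinned to core 0
    # create an unpinned thread, raise its activity, then delete one
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50             # bump to 50% active
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"                    # remove it and let the scheduler rebalance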
00:05:10.457 00:05:10.457 real 0m4.435s 00:05:10.457 user 0m7.627s 00:05:10.457 sys 0m0.373s 00:05:10.457 16:30:01 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.457 16:30:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.457 ************************************ 00:05:10.457 END TEST event_scheduler 00:05:10.457 ************************************ 00:05:10.457 16:30:02 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:10.457 16:30:02 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:10.457 16:30:02 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.457 16:30:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.457 16:30:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.457 ************************************ 00:05:10.457 START TEST app_repeat 00:05:10.457 ************************************ 00:05:10.457 16:30:02 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:10.457 16:30:02 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.457 16:30:02 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.457 16:30:02 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:10.457 16:30:02 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.457 16:30:02 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:10.457 16:30:02 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:10.457 16:30:02 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:10.457 16:30:02 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2484207 00:05:10.457 16:30:02 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.457 16:30:02 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:10.457 16:30:02 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2484207' 00:05:10.457 Process app_repeat pid: 2484207 00:05:10.457 16:30:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:10.457 16:30:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:10.457 spdk_app_start Round 0 00:05:10.457 16:30:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2484207 /var/tmp/spdk-nbd.sock 00:05:10.457 16:30:02 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2484207 ']' 00:05:10.457 16:30:02 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.457 16:30:02 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.457 16:30:02 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:10.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.457 16:30:02 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.457 16:30:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.457 [2024-10-01 16:30:02.091122] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:05:10.457 [2024-10-01 16:30:02.091185] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484207 ] 00:05:10.718 [2024-10-01 16:30:02.170433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.718 [2024-10-01 16:30:02.238647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.718 [2024-10-01 16:30:02.238651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.718 16:30:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.718 16:30:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:10.718 16:30:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.978 Malloc0 00:05:10.978 16:30:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.238 Malloc1 00:05:11.238 16:30:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.238 16:30:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.238 16:30:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.238 16:30:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.238 16:30:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.238 16:30:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.238 16:30:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.238 16:30:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.238 16:30:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.238 16:30:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.238 16:30:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.238 16:30:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.238 16:30:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.238 16:30:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.238 16:30:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.238 16:30:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.499 /dev/nbd0 00:05:11.499 16:30:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.499 16:30:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.499 16:30:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:11.499 16:30:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:11.499 16:30:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:11.499 16:30:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:11.499 16:30:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:05:11.499 16:30:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:11.499 16:30:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:11.499 16:30:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:11.499 16:30:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.499 1+0 records in 00:05:11.499 1+0 records out 00:05:11.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027894 s, 14.7 MB/s 00:05:11.499 16:30:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.499 16:30:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:11.499 16:30:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.499 16:30:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:11.499 16:30:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:11.499 16:30:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.499 16:30:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.499 16:30:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.760 /dev/nbd1 00:05:11.760 16:30:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.760 16:30:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.760 16:30:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:11.760 16:30:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:11.760 16:30:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:11.760 16:30:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:11.760 16:30:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:11.760 16:30:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:11.760 16:30:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:11.760 16:30:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:11.760 16:30:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.760 1+0 records in 00:05:11.760 1+0 records out 00:05:11.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269733 s, 15.2 MB/s 00:05:11.760 16:30:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.760 16:30:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:11.760 16:30:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.760 16:30:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:11.760 16:30:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:11.760 16:30:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.760 16:30:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.760 16:30:03 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.760 16:30:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.760 16:30:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.020 16:30:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.021 { 00:05:12.021 "nbd_device": "/dev/nbd0", 00:05:12.021 "bdev_name": "Malloc0" 00:05:12.021 }, 00:05:12.021 { 00:05:12.021 "nbd_device": "/dev/nbd1", 00:05:12.021 "bdev_name": "Malloc1" 00:05:12.021 } 00:05:12.021 ]' 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.021 { 00:05:12.021 "nbd_device": "/dev/nbd0", 00:05:12.021 "bdev_name": "Malloc0" 00:05:12.021 }, 00:05:12.021 { 00:05:12.021 "nbd_device": "/dev/nbd1", 00:05:12.021 "bdev_name": "Malloc1" 00:05:12.021 } 00:05:12.021 ]' 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.021 /dev/nbd1' 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.021 /dev/nbd1' 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.021 256+0 records in 00:05:12.021 256+0 records out 00:05:12.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012814 s, 81.8 MB/s 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.021 256+0 records in 00:05:12.021 256+0 records out 00:05:12.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170638 s, 61.5 MB/s 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.021 256+0 records in 00:05:12.021 256+0 records out 00:05:12.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0186447 s, 56.2 MB/s 00:05:12.021 16:30:03 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.021 16:30:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.282 16:30:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.282 16:30:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.282 16:30:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.282 16:30:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.282 16:30:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.282 16:30:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.282 16:30:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.282 16:30:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.282 16:30:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.282 16:30:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.544 16:30:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.544 16:30:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.544 16:30:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.544 16:30:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.544 16:30:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:12.544 16:30:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.544 16:30:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.544 16:30:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.544 16:30:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.544 16:30:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.544 16:30:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.804 16:30:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.804 16:30:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.804 16:30:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.804 16:30:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.804 16:30:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.804 16:30:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.804 16:30:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:12.804 16:30:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.804 16:30:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.804 16:30:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.804 16:30:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.804 16:30:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.804 16:30:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.064 16:30:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.064 [2024-10-01 16:30:04.674534] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.064 [2024-10-01 16:30:04.735308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.064 [2024-10-01 16:30:04.735313] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.325 [2024-10-01 16:30:04.765343] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.325 [2024-10-01 16:30:04.765379] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.877 16:30:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:15.877 16:30:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:15.877 spdk_app_start Round 1 00:05:15.877 16:30:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2484207 /var/tmp/spdk-nbd.sock 00:05:15.877 16:30:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2484207 ']' 00:05:15.877 16:30:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.877 16:30:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.877 16:30:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
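The data path that each app_repeat round just verified can be summarized by this sketch; block sizes, counts, and paths are taken from the dd/cmp xtrace above, and only the loop structure is a reconstruction:

    testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
    dd if=/dev/urandom of=$testdir/nbdrandtest bs=4096 count=256           # generate 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$testdir/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct  # write it through each nbd device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $testdir/nbdrandtest $nbd                             # read back and compare byte-for-byte
    done
    rm $testdir/nbdrandtest                                                # clean up before the next round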
00:05:15.877 16:30:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.877 16:30:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.139 16:30:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.139 16:30:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:16.139 16:30:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.400 Malloc0 00:05:16.400 16:30:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.661 Malloc1 00:05:16.661 16:30:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.661 16:30:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.661 16:30:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.661 16:30:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:16.661 16:30:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.661 16:30:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:16.661 16:30:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.661 16:30:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.661 16:30:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.662 16:30:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:16.662 16:30:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.662 16:30:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:16.662 16:30:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:16.662 16:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:16.662 16:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.662 16:30:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:16.922 /dev/nbd0 00:05:16.922 16:30:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:16.922 16:30:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:16.922 16:30:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:16.922 16:30:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:16.922 16:30:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:16.922 16:30:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:16.922 16:30:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:16.922 16:30:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:16.922 16:30:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:16.922 16:30:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:16.922 16:30:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:16.922 1+0 records in 00:05:16.922 1+0 records out 00:05:16.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242446 s, 16.9 MB/s 00:05:16.922 16:30:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:16.922 16:30:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:16.922 16:30:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:16.922 16:30:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:16.922 16:30:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:16.922 16:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.922 16:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.922 16:30:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.182 /dev/nbd1 00:05:17.182 16:30:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.182 16:30:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.182 16:30:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:17.182 16:30:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:17.182 16:30:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:17.182 16:30:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:17.182 16:30:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:17.182 16:30:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:17.182 16:30:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:17.182 16:30:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:17.182 16:30:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.182 1+0 records in 00:05:17.182 1+0 records out 00:05:17.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263787 s, 15.5 MB/s 00:05:17.182 16:30:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.182 16:30:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:17.182 16:30:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.182 16:30:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:17.182 16:30:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:17.182 16:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.182 16:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.182 16:30:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.182 16:30:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.182 16:30:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.444 16:30:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:17.444 { 00:05:17.444 "nbd_device": "/dev/nbd0", 00:05:17.444 "bdev_name": "Malloc0" 00:05:17.444 }, 00:05:17.444 { 00:05:17.444 "nbd_device": "/dev/nbd1", 00:05:17.444 "bdev_name": "Malloc1" 00:05:17.444 } 00:05:17.444 ]' 00:05:17.444 16:30:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.444 { 00:05:17.444 "nbd_device": "/dev/nbd0", 00:05:17.444 "bdev_name": "Malloc0" 00:05:17.444 }, 00:05:17.444 { 00:05:17.444 "nbd_device": "/dev/nbd1", 00:05:17.444 "bdev_name": "Malloc1" 00:05:17.444 } 00:05:17.444 ]' 00:05:17.444 16:30:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.444 /dev/nbd1' 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.444 /dev/nbd1' 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.444 256+0 records in 00:05:17.444 256+0 records out 00:05:17.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118759 s, 88.3 MB/s 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.444 256+0 records in 00:05:17.444 256+0 records out 00:05:17.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147826 s, 70.9 MB/s 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.444 256+0 records in 00:05:17.444 256+0 records out 00:05:17.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188171 s, 55.7 MB/s 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.444 16:30:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:17.706 16:30:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:17.706 16:30:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:17.706 16:30:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:17.706 16:30:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.706 16:30:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.706 16:30:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:17.706 16:30:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.706 16:30:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.706 16:30:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.706 16:30:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:17.965 16:30:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:17.965 16:30:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:17.965 16:30:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:17.965 16:30:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.965 16:30:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.965 16:30:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:17.965 16:30:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.965 16:30:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.965 16:30:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.965 16:30:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.966 16:30:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.226 16:30:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.226 16:30:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.226 16:30:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.226 16:30:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.226 16:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.226 16:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.226 16:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.226 16:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.226 16:30:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.226 16:30:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.226 16:30:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.226 16:30:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.227 16:30:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.487 16:30:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:18.748 [2024-10-01 16:30:10.186155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.748 [2024-10-01 16:30:10.246902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.748 [2024-10-01 16:30:10.246906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.748 [2024-10-01 16:30:10.277826] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:18.748 [2024-10-01 16:30:10.277863] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.052 16:30:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.052 16:30:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:22.052 spdk_app_start Round 2 00:05:22.052 16:30:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2484207 /var/tmp/spdk-nbd.sock 00:05:22.052 16:30:13 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2484207 ']' 00:05:22.052 16:30:13 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.052 16:30:13 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.052 16:30:13 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
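The empty-count check at the end of each round follows from the nbd_get_disks output seen above. A sketch of the pipeline, with the socket path as logged; composing the separate xtrace steps into one pipeline is a reconstruction:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    disks=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks)                            # '[]' once both disks are stopped
    count=$(echo "$disks" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)    # grep -c exits 1 on zero matches, hence the || true
    [ "$count" -eq 0 ]                                                               # all nbd devices must be detached before the next round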
00:05:22.052 16:30:13 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.052 16:30:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.052 16:30:13 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.052 16:30:13 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:22.052 16:30:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.052 Malloc0 00:05:22.052 16:30:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.052 Malloc1 00:05:22.052 16:30:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.052 16:30:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.052 16:30:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.052 16:30:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.052 16:30:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.052 16:30:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.052 16:30:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.052 16:30:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.052 16:30:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.052 16:30:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.052 16:30:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.052 16:30:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.052 16:30:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:22.052 16:30:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.052 16:30:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.052 16:30:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.313 /dev/nbd0 00:05:22.313 16:30:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.313 16:30:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.313 16:30:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:22.313 16:30:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:22.313 16:30:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:22.313 16:30:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:22.313 16:30:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:22.313 16:30:13 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:22.313 16:30:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:22.313 16:30:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:22.313 16:30:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:22.313 1+0 records in 00:05:22.313 1+0 records out 00:05:22.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026793 s, 15.3 MB/s 00:05:22.313 16:30:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.313 16:30:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:22.313 16:30:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.313 16:30:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:22.313 16:30:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:22.313 16:30:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.313 16:30:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.313 16:30:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.573 /dev/nbd1 00:05:22.573 16:30:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.573 16:30:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.573 16:30:14 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:22.574 16:30:14 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:22.574 16:30:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:22.574 16:30:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:22.574 16:30:14 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:22.574 16:30:14 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:22.574 16:30:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:22.574 16:30:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:22.574 16:30:14 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.574 1+0 records in 00:05:22.574 1+0 records out 00:05:22.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259517 s, 15.8 MB/s 00:05:22.574 16:30:14 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.574 16:30:14 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:22.574 16:30:14 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.574 16:30:14 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:22.574 16:30:14 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:22.574 16:30:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.574 16:30:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.574 16:30:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.574 16:30:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.574 16:30:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:22.835 { 00:05:22.835 "nbd_device": "/dev/nbd0", 00:05:22.835 "bdev_name": "Malloc0" 00:05:22.835 }, 00:05:22.835 { 00:05:22.835 "nbd_device": "/dev/nbd1", 00:05:22.835 "bdev_name": "Malloc1" 00:05:22.835 } 00:05:22.835 ]' 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.835 { 00:05:22.835 "nbd_device": "/dev/nbd0", 00:05:22.835 "bdev_name": "Malloc0" 00:05:22.835 }, 00:05:22.835 { 00:05:22.835 "nbd_device": "/dev/nbd1", 00:05:22.835 "bdev_name": "Malloc1" 00:05:22.835 } 00:05:22.835 ]' 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.835 /dev/nbd1' 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.835 /dev/nbd1' 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.835 256+0 records in 00:05:22.835 256+0 records out 00:05:22.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118599 s, 88.4 MB/s 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.835 256+0 records in 00:05:22.835 256+0 records out 00:05:22.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015958 s, 65.7 MB/s 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.835 16:30:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.097 256+0 records in 00:05:23.097 256+0 records out 00:05:23.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194763 s, 53.8 MB/s 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.097 16:30:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.358 16:30:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.358 16:30:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.358 16:30:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.358 16:30:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.358 16:30:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.358 16:30:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.358 16:30:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.358 16:30:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.358 16:30:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.358 16:30:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.358 16:30:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.358 16:30:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.358 16:30:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.358 16:30:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.358 16:30:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.358 16:30:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.358 16:30:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.358 16:30:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.358 16:30:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.358 16:30:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.358 16:30:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.618 16:30:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.618 16:30:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.618 16:30:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.618 16:30:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.618 16:30:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.618 16:30:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.618 16:30:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:23.618 16:30:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.618 16:30:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.618 16:30:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.618 16:30:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.618 16:30:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.618 16:30:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.878 16:30:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.138 [2024-10-01 16:30:15.617924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.139 [2024-10-01 16:30:15.678635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.139 [2024-10-01 16:30:15.678640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.139 [2024-10-01 16:30:15.708819] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.139 [2024-10-01 16:30:15.708853] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.453 16:30:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2484207 /var/tmp/spdk-nbd.sock 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2484207 ']' 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:27.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
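waitfornbd_exit, whose retry counters appear in the teardown xtrace above, amounts to a small polling loop. A reconstruction follows: the 20-try bound and the /proc/partitions grep are as logged, while the sleep interval is an assumption:

    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            # the device is gone once its name no longer appears in /proc/partitions
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1    # assumed back-off between polls
        done
        return 0
    }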
00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:27.453 16:30:18 event.app_repeat -- event/event.sh@39 -- # killprocess 2484207 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2484207 ']' 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2484207 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2484207 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2484207' 00:05:27.453 killing process with pid 2484207 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2484207 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2484207 00:05:27.453 spdk_app_start is called in Round 0. 00:05:27.453 Shutdown signal received, stop current app iteration 00:05:27.453 Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 reinitialization... 00:05:27.453 spdk_app_start is called in Round 1. 00:05:27.453 Shutdown signal received, stop current app iteration 00:05:27.453 Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 reinitialization... 00:05:27.453 spdk_app_start is called in Round 2. 00:05:27.453 Shutdown signal received, stop current app iteration 00:05:27.453 Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 reinitialization... 00:05:27.453 spdk_app_start is called in Round 3. 
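[Annotation] killprocess, traced above for pid 2484207, is the common teardown helper in autotest_common.sh. A sketch following the same flow as the trace (kill -0 probe, uname check, ps comm lookup, sudo guard); the sudo branch is never entered in these runs, so its body here is an assumption:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 0        # already exited, nothing to do
      local process_name=
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in the runs above
      fi
      if [ "$process_name" = sudo ]; then
          # assumed: re-target the child that the sudo wrapper spawned
          pid=$(ps -o pid= --ppid "$pid" | awk 'NR==1{print $1}')
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                       # the @974 wait line in the trace
  }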
00:05:27.453 Shutdown signal received, stop current app iteration 00:05:27.453 16:30:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:27.453 16:30:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:27.453 00:05:27.453 real 0m16.827s 00:05:27.453 user 0m37.188s 00:05:27.453 sys 0m2.515s 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.453 16:30:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.453 ************************************ 00:05:27.453 END TEST app_repeat 00:05:27.453 ************************************ 00:05:27.453 16:30:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:27.453 16:30:18 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:27.453 16:30:18 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.453 16:30:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.453 16:30:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.453 ************************************ 00:05:27.453 START TEST cpu_locks 00:05:27.453 ************************************ 00:05:27.453 16:30:18 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:27.453 * Looking for test storage... 00:05:27.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:27.453 16:30:19 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:27.453 16:30:19 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:27.453 16:30:19 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:27.453 16:30:19 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.453 16:30:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:27.453 16:30:19 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.453 16:30:19 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:27.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.454 --rc genhtml_branch_coverage=1 00:05:27.454 --rc genhtml_function_coverage=1 00:05:27.454 --rc genhtml_legend=1 00:05:27.454 --rc geninfo_all_blocks=1 00:05:27.454 --rc geninfo_unexecuted_blocks=1 00:05:27.454 00:05:27.454 ' 00:05:27.454 16:30:19 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:27.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.454 --rc genhtml_branch_coverage=1 00:05:27.454 --rc genhtml_function_coverage=1 00:05:27.454 --rc genhtml_legend=1 00:05:27.454 --rc geninfo_all_blocks=1 00:05:27.454 --rc geninfo_unexecuted_blocks=1 00:05:27.454 00:05:27.454 ' 00:05:27.454 16:30:19 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:27.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.454 --rc genhtml_branch_coverage=1 00:05:27.454 --rc genhtml_function_coverage=1 00:05:27.454 --rc genhtml_legend=1 00:05:27.454 --rc geninfo_all_blocks=1 00:05:27.454 --rc geninfo_unexecuted_blocks=1 00:05:27.454 00:05:27.454 ' 00:05:27.454 16:30:19 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:27.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.454 --rc genhtml_branch_coverage=1 00:05:27.454 --rc genhtml_function_coverage=1 00:05:27.454 --rc genhtml_legend=1 00:05:27.454 --rc geninfo_all_blocks=1 00:05:27.454 --rc geninfo_unexecuted_blocks=1 00:05:27.454 00:05:27.454 ' 00:05:27.454 16:30:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:27.454 16:30:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:27.454 16:30:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:27.454 16:30:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:27.454 16:30:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.454 16:30:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.454 16:30:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.754 ************************************ 
00:05:27.754 START TEST default_locks 00:05:27.754 ************************************ 00:05:27.754 16:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:27.754 16:30:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2487912 00:05:27.754 16:30:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2487912 00:05:27.754 16:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2487912 ']' 00:05:27.754 16:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.754 16:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.754 16:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.754 16:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.754 16:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.754 16:30:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.754 [2024-10-01 16:30:19.194935] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:27.754 [2024-10-01 16:30:19.194988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2487912 ] 00:05:27.754 [2024-10-01 16:30:19.272934] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.754 [2024-10-01 16:30:19.336690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.436 16:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.436 16:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:28.436 16:30:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2487912 00:05:28.436 16:30:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2487912 00:05:28.436 16:30:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.744 lslocks: write error 00:05:28.744 16:30:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2487912 00:05:28.744 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2487912 ']' 00:05:28.744 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2487912 00:05:28.744 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:28.744 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2487912 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 2487912' 00:05:28.745 killing process with pid 2487912 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2487912 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2487912 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2487912 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2487912 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2487912 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2487912 ']' 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
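[Annotation] The recurring "lslocks: write error" lines in this log are harmless: locks_exist pipes lslocks into grep -q, grep exits as soon as it matches, and lslocks complains about the broken pipe. A sketch consistent with the @22 trace lines:

  locks_exist() {
      # succeeds iff the target pid holds an spdk_cpu_lock_* file lock
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }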
00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2487912) - No such process 00:05:28.745 ERROR: process (pid: 2487912) is no longer running 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:28.745 00:05:28.745 real 0m1.245s 00:05:28.745 user 0m1.362s 00:05:28.745 sys 0m0.373s 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.745 16:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.745 ************************************ 00:05:28.745 END TEST default_locks 00:05:28.745 ************************************ 00:05:28.745 16:30:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:28.745 16:30:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.745 16:30:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.745 16:30:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.005 ************************************ 00:05:29.005 START TEST default_locks_via_rpc 00:05:29.005 ************************************ 00:05:29.005 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:29.005 16:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2488097 00:05:29.005 16:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2488097 00:05:29.005 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2488097 ']' 00:05:29.005 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.005 16:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.005 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.005 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
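[Annotation] The NOT wrapper traced just above inverts a command's exit status so that an expected failure (here, waitforlisten on a pid that was already killed) makes the test pass. A minimal sketch; the real helper also validates the argument via type -t and special-cases signal exits (the (( es > 128 )) line in the trace), which this simplification omits:

  NOT() {
      local es=0
      "$@" || es=$?
      # a nonzero status from the wrapped command is the expected outcome
      (( es != 0 ))
  }

Usage as in the trace: NOT waitforlisten 2487912 — the "kill: (2487912) - No such process" message is expected output, and es=1 makes NOT itself return success.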
00:05:29.005 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.005 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.005 [2024-10-01 16:30:20.473615] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:29.005 [2024-10-01 16:30:20.473653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488097 ] 00:05:29.005 [2024-10-01 16:30:20.543601] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.005 [2024-10-01 16:30:20.605318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2488097 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2488097 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2488097 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2488097 ']' 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2488097 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:29.265 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2488097 00:05:29.266 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:29.266 
16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:29.266 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2488097' 00:05:29.266 killing process with pid 2488097 00:05:29.266 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2488097 00:05:29.266 16:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2488097 00:05:29.526 00:05:29.526 real 0m0.700s 00:05:29.526 user 0m0.700s 00:05:29.526 sys 0m0.317s 00:05:29.526 16:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.526 16:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.526 ************************************ 00:05:29.526 END TEST default_locks_via_rpc 00:05:29.526 ************************************ 00:05:29.526 16:30:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:29.526 16:30:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.526 16:30:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.526 16:30:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.786 ************************************ 00:05:29.786 START TEST non_locking_app_on_locked_coremask 00:05:29.786 ************************************ 00:05:29.786 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:29.786 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2488295 00:05:29.786 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2488295 /var/tmp/spdk.sock 00:05:29.787 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.787 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2488295 ']' 00:05:29.787 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.787 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.787 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.787 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.787 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.787 [2024-10-01 16:30:21.268629] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:05:29.787 [2024-10-01 16:30:21.268675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488295 ] 00:05:29.787 [2024-10-01 16:30:21.344506] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.787 [2024-10-01 16:30:21.408193] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.048 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.048 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:30.048 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2488301 00:05:30.048 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2488301 /var/tmp/spdk2.sock 00:05:30.048 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2488301 ']' 00:05:30.048 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:30.048 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.048 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.048 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.048 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.048 16:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.048 [2024-10-01 16:30:21.626775] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:30.048 [2024-10-01 16:30:21.626824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488301 ] 00:05:30.048 [2024-10-01 16:30:21.714900] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
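[Annotation] default_locks_via_rpc, above, toggles the core locks at runtime rather than at startup. The two RPCs it exercises are ordinary rpc.py calls against the target's socket; an equivalent manual invocation, using the paths from this job:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # release the per-core lock files
  $rpc -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # re-claim them
  lslocks -p "$pid" | grep spdk_cpu_lock                       # lock rows present only while enabled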
00:05:30.048 [2024-10-01 16:30:21.714931] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.309 [2024-10-01 16:30:21.841600] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.880 16:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.880 16:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:30.880 16:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2488295 00:05:30.880 16:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2488295 00:05:30.880 16:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.448 lslocks: write error 00:05:31.448 16:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2488295 00:05:31.448 16:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2488295 ']' 00:05:31.448 16:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2488295 00:05:31.448 16:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:31.448 16:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.448 16:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2488295 00:05:31.448 16:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:31.448 16:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:31.448 16:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2488295' 00:05:31.448 killing process with pid 2488295 00:05:31.448 16:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2488295 00:05:31.448 16:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2488295 00:05:32.018 16:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2488301 00:05:32.018 16:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2488301 ']' 00:05:32.018 16:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2488301 00:05:32.018 16:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:32.018 16:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.018 16:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2488301 00:05:32.018 16:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.018 16:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.018 16:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2488301' 00:05:32.018 
killing process with pid 2488301 00:05:32.018 16:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2488301 00:05:32.018 16:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2488301 00:05:32.279 00:05:32.279 real 0m2.517s 00:05:32.279 user 0m2.804s 00:05:32.279 sys 0m0.869s 00:05:32.279 16:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.279 16:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.279 ************************************ 00:05:32.279 END TEST non_locking_app_on_locked_coremask 00:05:32.279 ************************************ 00:05:32.279 16:30:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:32.279 16:30:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.279 16:30:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.279 16:30:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.279 ************************************ 00:05:32.279 START TEST locking_app_on_unlocked_coremask 00:05:32.279 ************************************ 00:05:32.279 16:30:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:32.279 16:30:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2488667 00:05:32.279 16:30:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2488667 /var/tmp/spdk.sock 00:05:32.279 16:30:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:32.279 16:30:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2488667 ']' 00:05:32.279 16:30:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.279 16:30:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.279 16:30:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.279 16:30:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.279 16:30:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.279 [2024-10-01 16:30:23.872129] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:32.279 [2024-10-01 16:30:23.872176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488667 ] 00:05:32.279 [2024-10-01 16:30:23.949031] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
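[Annotation] Every "START TEST"/"END TEST" banner pair and each real/user/sys triple in this log comes from the run_test wrapper. A sketch inferred from that output (the argument-count guard matches the '[' 2 -le 1 ']' trace line; banner width and xtrace toggling are approximated):

  run_test() {
      [ $# -le 1 ] && return 1       # need a test name plus a command
      local test_name=$1
      shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                      # produces the real/user/sys lines seen above
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }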
00:05:32.279 [2024-10-01 16:30:23.949058] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.539 [2024-10-01 16:30:24.009762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.108 16:30:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.108 16:30:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:33.108 16:30:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:33.108 16:30:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2488948 00:05:33.108 16:30:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2488948 /var/tmp/spdk2.sock 00:05:33.108 16:30:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2488948 ']' 00:05:33.108 16:30:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.108 16:30:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.108 16:30:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.108 16:30:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.108 16:30:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.108 [2024-10-01 16:30:24.763720] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
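[Annotation] waitforlisten blocks until a freshly launched spdk_tgt answers on its UNIX-domain RPC socket; with two instances in flight it is what keeps /var/tmp/spdk.sock and /var/tmp/spdk2.sock from racing each other. A sketch: max_retries=100 and the echo text match the trace, but the liveness probe shown here (rpc_get_methods) is an assumption about the real implementation:

  waitforlisten() {
      local pid=$1
      [ -n "$pid" ] || return 1
      local rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1                  # target died early
          if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
              return 0                                            # socket is serving
          fi
          sleep 0.1
      done
      return 1
  }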
00:05:33.108 [2024-10-01 16:30:24.763770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488948 ] 00:05:33.368 [2024-10-01 16:30:24.851124] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.368 [2024-10-01 16:30:24.977882] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.309 16:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.309 16:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:34.309 16:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2488948 00:05:34.309 16:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2488948 00:05:34.309 16:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.309 lslocks: write error 00:05:34.309 16:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2488667 00:05:34.309 16:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2488667 ']' 00:05:34.309 16:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2488667 00:05:34.309 16:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:34.309 16:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.309 16:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2488667 00:05:34.568 16:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.568 16:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.568 16:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2488667' 00:05:34.568 killing process with pid 2488667 00:05:34.568 16:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2488667 00:05:34.568 16:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2488667 00:05:34.828 16:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2488948 00:05:34.828 16:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2488948 ']' 00:05:34.828 16:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2488948 00:05:34.828 16:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:34.828 16:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.828 16:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2488948 00:05:35.088 16:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:35.088 16:30:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:35.088 16:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2488948' 00:05:35.088 killing process with pid 2488948 00:05:35.088 16:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2488948 00:05:35.088 16:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2488948 00:05:35.088 00:05:35.088 real 0m2.930s 00:05:35.088 user 0m3.377s 00:05:35.088 sys 0m0.807s 00:05:35.088 16:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.088 16:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.088 ************************************ 00:05:35.088 END TEST locking_app_on_unlocked_coremask 00:05:35.088 ************************************ 00:05:35.348 16:30:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:35.348 16:30:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.348 16:30:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.348 16:30:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.348 ************************************ 00:05:35.348 START TEST locking_app_on_locked_coremask 00:05:35.348 ************************************ 00:05:35.348 16:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:35.348 16:30:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2489289 00:05:35.348 16:30:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2489289 /var/tmp/spdk.sock 00:05:35.348 16:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2489289 ']' 00:05:35.348 16:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.348 16:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.348 16:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.348 16:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.348 16:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.348 16:30:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.348 [2024-10-01 16:30:26.844693] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:05:35.348 [2024-10-01 16:30:26.844738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2489289 ] 00:05:35.348 [2024-10-01 16:30:26.921364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.348 [2024-10-01 16:30:26.983623] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.288 16:30:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.288 16:30:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:36.288 16:30:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2489478 00:05:36.288 16:30:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2489478 /var/tmp/spdk2.sock 00:05:36.288 16:30:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:36.288 16:30:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:36.288 16:30:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2489478 /var/tmp/spdk2.sock 00:05:36.288 16:30:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:36.288 16:30:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.288 16:30:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:36.288 16:30:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.288 16:30:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2489478 /var/tmp/spdk2.sock 00:05:36.288 16:30:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2489478 ']' 00:05:36.288 16:30:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.288 16:30:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.288 16:30:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.288 16:30:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.288 16:30:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.288 [2024-10-01 16:30:27.677136] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:05:36.288 [2024-10-01 16:30:27.677179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2489478 ] 00:05:36.288 [2024-10-01 16:30:27.758301] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2489289 has claimed it. 00:05:36.288 [2024-10-01 16:30:27.758339] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:36.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2489478) - No such process 00:05:36.857 ERROR: process (pid: 2489478) is no longer running 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2489289 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2489289 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.857 lslocks: write error 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2489289 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2489289 ']' 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2489289 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2489289 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2489289' 00:05:36.857 killing process with pid 2489289 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2489289 00:05:36.857 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2489289 00:05:37.116 00:05:37.116 real 0m1.891s 00:05:37.116 user 0m2.153s 00:05:37.116 sys 0m0.425s 00:05:37.116 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:05:37.116 16:30:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.116 ************************************ 00:05:37.116 END TEST locking_app_on_locked_coremask 00:05:37.116 ************************************ 00:05:37.116 16:30:28 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:37.116 16:30:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.116 16:30:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.116 16:30:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.116 ************************************ 00:05:37.116 START TEST locking_overlapped_coremask 00:05:37.116 ************************************ 00:05:37.116 16:30:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:37.116 16:30:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2489634 00:05:37.116 16:30:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2489634 /var/tmp/spdk.sock 00:05:37.116 16:30:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2489634 ']' 00:05:37.116 16:30:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.116 16:30:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.116 16:30:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.116 16:30:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.117 16:30:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.117 16:30:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:37.376 [2024-10-01 16:30:28.806703] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
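[Annotation] The "Cannot create lock on core 0, probably process 2489289 has claimed it" failure above is driven by per-core lock files: each app instance takes an exclusive lock on /var/tmp/spdk_cpu_lock_NNN for every core in its mask. The collision can be reproduced from the shell (illustrative only; SPDK acquires these locks in C inside app.c, and the exact locking primitive is an assumption here):

  exec 9>/var/tmp/spdk_cpu_lock_000          # same file the first instance holds
  flock -n 9 || echo "core 0 already claimed"
  exec 9>&-                                  # release our descriptor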
00:05:37.376 [2024-10-01 16:30:28.806752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2489634 ] 00:05:37.376 [2024-10-01 16:30:28.882541] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:37.377 [2024-10-01 16:30:28.945966] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.377 [2024-10-01 16:30:28.946104] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.377 [2024-10-01 16:30:28.946194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.947 16:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.947 16:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:37.947 16:30:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2489906 00:05:37.947 16:30:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2489906 /var/tmp/spdk2.sock 00:05:37.947 16:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:37.947 16:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2489906 /var/tmp/spdk2.sock 00:05:37.947 16:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:37.947 16:30:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:37.947 16:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.947 16:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:37.947 16:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.947 16:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2489906 /var/tmp/spdk2.sock 00:05:37.947 16:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2489906 ']' 00:05:37.947 16:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.947 16:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.947 16:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.947 16:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.947 16:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.208 [2024-10-01 16:30:29.661433] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
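[Annotation] The failure that follows is guaranteed by the core masks: -m 0x7 is binary 111 (cores 0-2) and -m 0x1c is binary 11100 (cores 2-4), so the two instances contend for core 2, matching the "Cannot create lock on core 2" error below. The overlap can be checked with shell arithmetic:

  printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4 -> bit 2 -> core 2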
00:05:38.208 [2024-10-01 16:30:29.661481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2489906 ] 00:05:38.208 [2024-10-01 16:30:29.731034] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2489634 has claimed it. 00:05:38.208 [2024-10-01 16:30:29.731063] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:38.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2489906) - No such process 00:05:38.777 ERROR: process (pid: 2489906) is no longer running 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2489634 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2489634 ']' 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2489634 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2489634 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2489634' 00:05:38.777 killing process with pid 2489634 00:05:38.777 16:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2489634 00:05:38.777 16:30:30 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2489634 00:05:39.037 00:05:39.037 real 0m1.864s 00:05:39.037 user 0m5.323s 00:05:39.037 sys 0m0.409s 00:05:39.037 16:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.037 16:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.037 ************************************ 00:05:39.037 END TEST locking_overlapped_coremask 00:05:39.037 ************************************ 00:05:39.037 16:30:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:39.037 16:30:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.037 16:30:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.037 16:30:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.037 ************************************ 00:05:39.037 START TEST locking_overlapped_coremask_via_rpc 00:05:39.037 ************************************ 00:05:39.037 16:30:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:39.037 16:30:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2489983 00:05:39.037 16:30:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2489983 /var/tmp/spdk.sock 00:05:39.037 16:30:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2489983 ']' 00:05:39.037 16:30:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.037 16:30:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.037 16:30:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.037 16:30:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.037 16:30:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.037 16:30:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:39.297 [2024-10-01 16:30:30.727292] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:39.297 [2024-10-01 16:30:30.727339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2489983 ] 00:05:39.297 [2024-10-01 16:30:30.802245] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
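Both cpu_locks tests end by calling check_remaining_locks, whose body the trace prints in full: it globs the lock files actually present under /var/tmp and pattern-matches the joined list against a brace expansion of the files the surviving target should own. A standalone restatement of that check, assuming a target still holding cores 0-2:

    locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually present
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # what cores 0-2 should leave behind
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "locks OK" || echo "lock mismatch"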
00:05:39.297 [2024-10-01 16:30:30.802270] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.297 [2024-10-01 16:30:30.866015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.297 [2024-10-01 16:30:30.866095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.297 [2024-10-01 16:30:30.866233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.557 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.557 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:39.557 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2490047 00:05:39.557 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2490047 /var/tmp/spdk2.sock 00:05:39.557 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2490047 ']' 00:05:39.557 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.557 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.557 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.557 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.557 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.557 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:39.557 [2024-10-01 16:30:31.100254] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:39.557 [2024-10-01 16:30:31.100304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490047 ] 00:05:39.558 [2024-10-01 16:30:31.174707] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
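The via_rpc variant differs from the previous test only at startup: both targets are launched with --disable-cpumask-locks, so neither claims its per-core lock files at boot (hence the 'CPU core locks deactivated' notices) and both reactor sets come up even though the masks overlap. Locking is then switched on at runtime over RPC. A hedged sketch of that sequence, with the socket paths used in this run:

    build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks -r /var/tmp/spdk.sock &
    build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # once both are listening, claim cores 0-2 for the first target only
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks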
00:05:39.558 [2024-10-01 16:30:31.174731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.818 [2024-10-01 16:30:31.289029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.818 [2024-10-01 16:30:31.289143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:05:39.818 [2024-10-01 16:30:31.289144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.389 [2024-10-01 16:30:31.938035] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2489983 has claimed it. 
00:05:40.389 request: 00:05:40.389 { 00:05:40.389 "method": "framework_enable_cpumask_locks", 00:05:40.389 "req_id": 1 00:05:40.389 } 00:05:40.389 Got JSON-RPC error response 00:05:40.389 response: 00:05:40.389 { 00:05:40.389 "code": -32603, 00:05:40.389 "message": "Failed to claim CPU core: 2" 00:05:40.389 } 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2489983 /var/tmp/spdk.sock 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2489983 ']' 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.389 16:30:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.650 16:30:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.650 16:30:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:40.650 16:30:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2490047 /var/tmp/spdk2.sock 00:05:40.650 16:30:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2490047 ']' 00:05:40.650 16:30:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.650 16:30:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.650 16:30:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
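The request/response pair dumped above is the expected failure path: with the first target's locks now enabled, asking the second target (whose 0x1c mask overlaps on core 2) to enable its own returns JSON-RPC error -32603 carrying the claim failure as its message. The same probe by hand, assuming the state this test has built up; the exact error formatting is rpc.py's and is not guaranteed byte-for-byte:

    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected: a -32603 error object with message 'Failed to claim CPU core: 2'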
00:05:40.650 16:30:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.650 16:30:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.650 16:30:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.650 16:30:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:40.650 16:30:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:40.650 16:30:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:40.650 16:30:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:40.650 16:30:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:40.650 00:05:40.650 real 0m1.647s 00:05:40.650 user 0m0.789s 00:05:40.650 sys 0m0.131s 00:05:40.650 16:30:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.650 16:30:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.650 ************************************ 00:05:40.650 END TEST locking_overlapped_coremask_via_rpc 00:05:40.650 ************************************ 00:05:40.911 16:30:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:40.911 16:30:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2489983 ]] 00:05:40.911 16:30:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2489983 00:05:40.911 16:30:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2489983 ']' 00:05:40.911 16:30:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2489983 00:05:40.911 16:30:32 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:40.911 16:30:32 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.911 16:30:32 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2489983 00:05:40.911 16:30:32 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.911 16:30:32 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.911 16:30:32 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2489983' 00:05:40.911 killing process with pid 2489983 00:05:40.911 16:30:32 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2489983 00:05:40.911 16:30:32 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2489983 00:05:41.172 16:30:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2490047 ]] 00:05:41.172 16:30:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2490047 00:05:41.172 16:30:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2490047 ']' 00:05:41.172 16:30:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2490047 00:05:41.172 16:30:32 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:41.172 16:30:32 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:05:41.172 16:30:32 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2490047 00:05:41.172 16:30:32 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:41.172 16:30:32 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:41.172 16:30:32 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2490047' 00:05:41.172 killing process with pid 2490047 00:05:41.172 16:30:32 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2490047 00:05:41.172 16:30:32 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2490047 00:05:41.434 16:30:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:41.434 16:30:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:41.434 16:30:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2489983 ]] 00:05:41.434 16:30:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2489983 00:05:41.434 16:30:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2489983 ']' 00:05:41.434 16:30:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2489983 00:05:41.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2489983) - No such process 00:05:41.434 16:30:32 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2489983 is not found' 00:05:41.434 Process with pid 2489983 is not found 00:05:41.434 16:30:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2490047 ]] 00:05:41.434 16:30:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2490047 00:05:41.434 16:30:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2490047 ']' 00:05:41.435 16:30:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2490047 00:05:41.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2490047) - No such process 00:05:41.435 16:30:32 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2490047 is not found' 00:05:41.435 Process with pid 2490047 is not found 00:05:41.435 16:30:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:41.435 00:05:41.435 real 0m13.964s 00:05:41.435 user 0m25.289s 00:05:41.435 sys 0m4.228s 00:05:41.435 16:30:32 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.435 16:30:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.435 ************************************ 00:05:41.435 END TEST cpu_locks 00:05:41.435 ************************************ 00:05:41.435 00:05:41.435 real 0m39.610s 00:05:41.435 user 1m16.809s 00:05:41.435 sys 0m7.821s 00:05:41.435 16:30:32 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.435 16:30:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.435 ************************************ 00:05:41.435 END TEST event 00:05:41.435 ************************************ 00:05:41.435 16:30:32 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:41.435 16:30:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.435 16:30:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.435 16:30:32 -- common/autotest_common.sh@10 -- # set +x 00:05:41.435 ************************************ 00:05:41.435 START TEST thread 00:05:41.435 ************************************ 00:05:41.435 16:30:33 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:41.435 * Looking for test storage... 00:05:41.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:41.697 16:30:33 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:41.697 16:30:33 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:05:41.697 16:30:33 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:41.697 16:30:33 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:41.697 16:30:33 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.697 16:30:33 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.697 16:30:33 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.697 16:30:33 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.697 16:30:33 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.697 16:30:33 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.697 16:30:33 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.697 16:30:33 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.697 16:30:33 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.697 16:30:33 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.697 16:30:33 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.697 16:30:33 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:41.697 16:30:33 thread -- scripts/common.sh@345 -- # : 1 00:05:41.697 16:30:33 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.697 16:30:33 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.697 16:30:33 thread -- scripts/common.sh@365 -- # decimal 1 00:05:41.697 16:30:33 thread -- scripts/common.sh@353 -- # local d=1 00:05:41.697 16:30:33 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.697 16:30:33 thread -- scripts/common.sh@355 -- # echo 1 00:05:41.697 16:30:33 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.697 16:30:33 thread -- scripts/common.sh@366 -- # decimal 2 00:05:41.697 16:30:33 thread -- scripts/common.sh@353 -- # local d=2 00:05:41.697 16:30:33 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.697 16:30:33 thread -- scripts/common.sh@355 -- # echo 2 00:05:41.697 16:30:33 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.697 16:30:33 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.697 16:30:33 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.697 16:30:33 thread -- scripts/common.sh@368 -- # return 0 00:05:41.697 16:30:33 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.697 16:30:33 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:41.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.697 --rc genhtml_branch_coverage=1 00:05:41.697 --rc genhtml_function_coverage=1 00:05:41.697 --rc genhtml_legend=1 00:05:41.697 --rc geninfo_all_blocks=1 00:05:41.697 --rc geninfo_unexecuted_blocks=1 00:05:41.697 00:05:41.697 ' 00:05:41.697 16:30:33 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:41.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.697 --rc genhtml_branch_coverage=1 00:05:41.697 --rc genhtml_function_coverage=1 00:05:41.697 --rc genhtml_legend=1 00:05:41.697 --rc geninfo_all_blocks=1 00:05:41.697 --rc geninfo_unexecuted_blocks=1 00:05:41.697 
00:05:41.697 ' 00:05:41.697 16:30:33 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:41.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.697 --rc genhtml_branch_coverage=1 00:05:41.697 --rc genhtml_function_coverage=1 00:05:41.697 --rc genhtml_legend=1 00:05:41.697 --rc geninfo_all_blocks=1 00:05:41.697 --rc geninfo_unexecuted_blocks=1 00:05:41.697 00:05:41.697 ' 00:05:41.697 16:30:33 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:41.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.697 --rc genhtml_branch_coverage=1 00:05:41.697 --rc genhtml_function_coverage=1 00:05:41.697 --rc genhtml_legend=1 00:05:41.697 --rc geninfo_all_blocks=1 00:05:41.697 --rc geninfo_unexecuted_blocks=1 00:05:41.697 00:05:41.697 ' 00:05:41.697 16:30:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.697 16:30:33 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:41.697 16:30:33 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.697 16:30:33 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.697 ************************************ 00:05:41.697 START TEST thread_poller_perf 00:05:41.697 ************************************ 00:05:41.697 16:30:33 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.697 [2024-10-01 16:30:33.271727] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:41.697 [2024-10-01 16:30:33.271818] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490661 ] 00:05:41.697 [2024-10-01 16:30:33.364851] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.957 [2024-10-01 16:30:33.436764] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.957 Running 1000 pollers for 1 seconds with 1 microseconds period. 
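This poller_perf invocation registers 1000 pollers (-b 1000) on a 1-microsecond period (-l 1) and runs them for one second (-t 1). The summary block that follows is self-checking: poller_cost in cycles is the busy cycle count divided by total_run_count, and the nanosecond figure converts that through the reported TSC frequency:

    poller_cost_cyc  = busy / total_run_count
    poller_cost_nsec = poller_cost_cyc * 1e9 / tsc_hz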
00:05:42.897 ====================================== 00:05:42.897 busy:2614719178 (cyc) 00:05:42.897 total_run_count: 312000 00:05:42.897 tsc_hz: 2600000000 (cyc) 00:05:42.897 ====================================== 00:05:42.897 poller_cost: 8380 (cyc), 3223 (nsec) 00:05:42.897 00:05:42.897 real 0m1.246s 00:05:42.897 user 0m1.144s 00:05:42.897 sys 0m0.098s 00:05:42.897 16:30:34 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.897 16:30:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.897 ************************************ 00:05:42.897 END TEST thread_poller_perf 00:05:42.897 ************************************ 00:05:42.897 16:30:34 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:42.897 16:30:34 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:42.897 16:30:34 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.897 16:30:34 thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.897 ************************************ 00:05:42.897 START TEST thread_poller_perf 00:05:42.897 ************************************ 00:05:42.897 16:30:34 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:43.156 [2024-10-01 16:30:34.593650] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:43.156 [2024-10-01 16:30:34.593756] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490745 ] 00:05:43.156 [2024-10-01 16:30:34.674523] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.156 [2024-10-01 16:30:34.752701] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.156 Running 1000 pollers for 1 seconds with 0 microseconds period. 
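Plugging the timed run's numbers into that formula reproduces the reported costs, and the zero-period run that follows (-l 0) can be checked the same way; its per-invocation cost comes out far lower because the pollers fire continuously instead of once per microsecond:

    2614719178 cyc / 312000 runs        ≈ 8380 cyc per run
    8380 cyc / 2600000000 cyc/s * 1e9   ≈ 3223 nsec per run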
00:05:44.537 ====================================== 00:05:44.537 busy:2602395350 (cyc) 00:05:44.537 total_run_count: 4125000 00:05:44.537 tsc_hz: 2600000000 (cyc) 00:05:44.537 ====================================== 00:05:44.537 poller_cost: 630 (cyc), 242 (nsec) 00:05:44.537 00:05:44.537 real 0m1.230s 00:05:44.537 user 0m1.138s 00:05:44.537 sys 0m0.088s 00:05:44.537 16:30:35 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.537 16:30:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.537 ************************************ 00:05:44.537 END TEST thread_poller_perf 00:05:44.537 ************************************ 00:05:44.537 16:30:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:44.537 00:05:44.537 real 0m2.814s 00:05:44.537 user 0m2.445s 00:05:44.537 sys 0m0.381s 00:05:44.537 16:30:35 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.537 16:30:35 thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.537 ************************************ 00:05:44.537 END TEST thread 00:05:44.537 ************************************ 00:05:44.537 16:30:35 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:44.538 16:30:35 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:44.538 16:30:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.538 16:30:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.538 16:30:35 -- common/autotest_common.sh@10 -- # set +x 00:05:44.538 ************************************ 00:05:44.538 START TEST app_cmdline 00:05:44.538 ************************************ 00:05:44.538 16:30:35 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:44.538 * Looking for test storage... 00:05:44.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:44.538 16:30:36 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:44.538 16:30:36 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:05:44.538 16:30:36 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:44.538 16:30:36 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.538 16:30:36 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:44.538 16:30:36 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.538 16:30:36 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:44.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.538 --rc genhtml_branch_coverage=1 00:05:44.538 --rc genhtml_function_coverage=1 00:05:44.538 --rc genhtml_legend=1 00:05:44.538 --rc geninfo_all_blocks=1 00:05:44.538 --rc geninfo_unexecuted_blocks=1 00:05:44.538 00:05:44.538 ' 00:05:44.538 16:30:36 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:44.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.538 --rc genhtml_branch_coverage=1 00:05:44.538 --rc genhtml_function_coverage=1 00:05:44.538 --rc genhtml_legend=1 00:05:44.538 --rc geninfo_all_blocks=1 00:05:44.538 --rc geninfo_unexecuted_blocks=1 00:05:44.538 00:05:44.538 ' 00:05:44.538 16:30:36 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:44.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.538 --rc genhtml_branch_coverage=1 00:05:44.538 --rc genhtml_function_coverage=1 00:05:44.538 --rc genhtml_legend=1 00:05:44.538 --rc geninfo_all_blocks=1 00:05:44.538 --rc geninfo_unexecuted_blocks=1 00:05:44.538 00:05:44.538 ' 00:05:44.538 16:30:36 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:44.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.538 --rc genhtml_branch_coverage=1 00:05:44.538 --rc genhtml_function_coverage=1 00:05:44.538 --rc genhtml_legend=1 00:05:44.538 --rc geninfo_all_blocks=1 00:05:44.538 --rc geninfo_unexecuted_blocks=1 00:05:44.538 00:05:44.538 ' 00:05:44.538 16:30:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:44.538 16:30:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2491095 00:05:44.538 16:30:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2491095 00:05:44.538 16:30:36 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2491095 ']' 00:05:44.538 16:30:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:44.538 16:30:36 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.538 16:30:36 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.538 16:30:36 app_cmdline -- common/autotest_common.sh@838 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.538 16:30:36 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.538 16:30:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.538 [2024-10-01 16:30:36.163989] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:05:44.538 [2024-10-01 16:30:36.164049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2491095 ] 00:05:44.798 [2024-10-01 16:30:36.244094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.798 [2024-10-01 16:30:36.313303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.368 16:30:36 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.368 16:30:36 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:45.368 16:30:36 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:45.628 { 00:05:45.628 "version": "SPDK v25.01-pre git sha1 1c027d356", 00:05:45.628 "fields": { 00:05:45.628 "major": 25, 00:05:45.628 "minor": 1, 00:05:45.628 "patch": 0, 00:05:45.628 "suffix": "-pre", 00:05:45.628 "commit": "1c027d356" 00:05:45.628 } 00:05:45.628 } 00:05:45.628 16:30:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:45.628 16:30:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:45.628 16:30:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:45.628 16:30:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:45.628 16:30:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:45.628 16:30:37 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.628 16:30:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:45.628 16:30:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:45.628 16:30:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:45.628 16:30:37 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.628 16:30:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:45.628 16:30:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:45.628 16:30:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:45.628 16:30:37 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:45.628 16:30:37 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:45.628 16:30:37 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:45.628 16:30:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.628 16:30:37 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:45.628 16:30:37 app_cmdline -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:05:45.628 16:30:37 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:45.629 16:30:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.629 16:30:37 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:45.629 16:30:37 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:45.629 16:30:37 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:45.890 request: 00:05:45.890 { 00:05:45.890 "method": "env_dpdk_get_mem_stats", 00:05:45.890 "req_id": 1 00:05:45.890 } 00:05:45.890 Got JSON-RPC error response 00:05:45.890 response: 00:05:45.890 { 00:05:45.890 "code": -32601, 00:05:45.890 "message": "Method not found" 00:05:45.890 } 00:05:45.890 16:30:37 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:45.890 16:30:37 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.890 16:30:37 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:45.890 16:30:37 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:45.890 16:30:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2491095 00:05:45.890 16:30:37 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2491095 ']' 00:05:45.890 16:30:37 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2491095 00:05:45.890 16:30:37 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:45.890 16:30:37 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.890 16:30:37 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2491095 00:05:45.890 16:30:37 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.890 16:30:37 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.890 16:30:37 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2491095' 00:05:45.890 killing process with pid 2491095 00:05:45.890 16:30:37 app_cmdline -- common/autotest_common.sh@969 -- # kill 2491095 00:05:45.890 16:30:37 app_cmdline -- common/autotest_common.sh@974 -- # wait 2491095 00:05:46.151 00:05:46.151 real 0m1.726s 00:05:46.151 user 0m2.104s 00:05:46.151 sys 0m0.429s 00:05:46.151 16:30:37 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.151 16:30:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:46.151 ************************************ 00:05:46.151 END TEST app_cmdline 00:05:46.151 ************************************ 00:05:46.151 16:30:37 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:46.151 16:30:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.151 16:30:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.151 16:30:37 -- common/autotest_common.sh@10 -- # set +x 00:05:46.151 ************************************ 00:05:46.151 START TEST version 00:05:46.151 ************************************ 00:05:46.151 16:30:37 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:46.151 * Looking for test storage... 
00:05:46.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:46.151 16:30:37 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:46.151 16:30:37 version -- common/autotest_common.sh@1681 -- # lcov --version 00:05:46.151 16:30:37 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:46.412 16:30:37 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:46.412 16:30:37 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.412 16:30:37 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.412 16:30:37 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.412 16:30:37 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.412 16:30:37 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.412 16:30:37 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.412 16:30:37 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.412 16:30:37 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.412 16:30:37 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.412 16:30:37 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.412 16:30:37 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.412 16:30:37 version -- scripts/common.sh@344 -- # case "$op" in 00:05:46.412 16:30:37 version -- scripts/common.sh@345 -- # : 1 00:05:46.412 16:30:37 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.412 16:30:37 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.412 16:30:37 version -- scripts/common.sh@365 -- # decimal 1 00:05:46.412 16:30:37 version -- scripts/common.sh@353 -- # local d=1 00:05:46.412 16:30:37 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.412 16:30:37 version -- scripts/common.sh@355 -- # echo 1 00:05:46.412 16:30:37 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.412 16:30:37 version -- scripts/common.sh@366 -- # decimal 2 00:05:46.412 16:30:37 version -- scripts/common.sh@353 -- # local d=2 00:05:46.412 16:30:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.412 16:30:37 version -- scripts/common.sh@355 -- # echo 2 00:05:46.412 16:30:37 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.412 16:30:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.412 16:30:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.412 16:30:37 version -- scripts/common.sh@368 -- # return 0 00:05:46.412 16:30:37 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.412 16:30:37 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:46.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.412 --rc genhtml_branch_coverage=1 00:05:46.412 --rc genhtml_function_coverage=1 00:05:46.412 --rc genhtml_legend=1 00:05:46.412 --rc geninfo_all_blocks=1 00:05:46.412 --rc geninfo_unexecuted_blocks=1 00:05:46.412 00:05:46.412 ' 00:05:46.412 16:30:37 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:46.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.412 --rc genhtml_branch_coverage=1 00:05:46.412 --rc genhtml_function_coverage=1 00:05:46.412 --rc genhtml_legend=1 00:05:46.412 --rc geninfo_all_blocks=1 00:05:46.412 --rc geninfo_unexecuted_blocks=1 00:05:46.412 00:05:46.412 ' 00:05:46.412 16:30:37 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:46.412 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.412 --rc genhtml_branch_coverage=1 00:05:46.412 --rc genhtml_function_coverage=1 00:05:46.412 --rc genhtml_legend=1 00:05:46.412 --rc geninfo_all_blocks=1 00:05:46.412 --rc geninfo_unexecuted_blocks=1 00:05:46.412 00:05:46.412 ' 00:05:46.412 16:30:37 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:46.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.412 --rc genhtml_branch_coverage=1 00:05:46.412 --rc genhtml_function_coverage=1 00:05:46.412 --rc genhtml_legend=1 00:05:46.412 --rc geninfo_all_blocks=1 00:05:46.412 --rc geninfo_unexecuted_blocks=1 00:05:46.412 00:05:46.412 ' 00:05:46.412 16:30:37 version -- app/version.sh@17 -- # get_header_version major 00:05:46.412 16:30:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:46.412 16:30:37 version -- app/version.sh@14 -- # cut -f2 00:05:46.412 16:30:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:46.412 16:30:37 version -- app/version.sh@17 -- # major=25 00:05:46.412 16:30:37 version -- app/version.sh@18 -- # get_header_version minor 00:05:46.412 16:30:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:46.412 16:30:37 version -- app/version.sh@14 -- # cut -f2 00:05:46.412 16:30:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:46.412 16:30:37 version -- app/version.sh@18 -- # minor=1 00:05:46.412 16:30:37 version -- app/version.sh@19 -- # get_header_version patch 00:05:46.412 16:30:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:46.412 16:30:37 version -- app/version.sh@14 -- # cut -f2 00:05:46.412 16:30:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:46.412 16:30:37 version -- app/version.sh@19 -- # patch=0 00:05:46.412 16:30:37 version -- app/version.sh@20 -- # get_header_version suffix 00:05:46.412 16:30:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:46.412 16:30:37 version -- app/version.sh@14 -- # cut -f2 00:05:46.412 16:30:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:46.412 16:30:37 version -- app/version.sh@20 -- # suffix=-pre 00:05:46.412 16:30:37 version -- app/version.sh@22 -- # version=25.1 00:05:46.412 16:30:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:46.412 16:30:37 version -- app/version.sh@28 -- # version=25.1rc0 00:05:46.412 16:30:37 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:46.412 16:30:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:46.412 16:30:37 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:46.412 16:30:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:46.412 00:05:46.412 real 0m0.223s 00:05:46.412 user 0m0.135s 00:05:46.412 sys 0m0.129s 00:05:46.412 16:30:37 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.412 
16:30:37 version -- common/autotest_common.sh@10 -- # set +x 00:05:46.412 ************************************ 00:05:46.412 END TEST version 00:05:46.412 ************************************ 00:05:46.412 16:30:37 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:46.413 16:30:37 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:46.413 16:30:37 -- spdk/autotest.sh@194 -- # uname -s 00:05:46.413 16:30:37 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:46.413 16:30:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:46.413 16:30:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:46.413 16:30:37 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:46.413 16:30:37 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:46.413 16:30:37 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:46.413 16:30:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:46.413 16:30:37 -- common/autotest_common.sh@10 -- # set +x 00:05:46.413 16:30:38 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:46.413 16:30:38 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:46.413 16:30:38 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:46.413 16:30:38 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:46.413 16:30:38 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:46.413 16:30:38 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:46.413 16:30:38 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:46.413 16:30:38 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:46.413 16:30:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.413 16:30:38 -- common/autotest_common.sh@10 -- # set +x 00:05:46.413 ************************************ 00:05:46.413 START TEST nvmf_tcp 00:05:46.413 ************************************ 00:05:46.413 16:30:38 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:46.674 * Looking for test storage... 
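Every START/END TEST banner in this log comes from the harness's run_test wrapper, which autotest.sh uses here to hand nvmf.sh the tcp transport. A reduced sketch of the pattern, with the caveat that the real helper in autotest_common.sh also validates its arguments and toggles xtrace (the '[' 3 -le 1 ']' and xtrace_disable lines in the trace):

    run_test() {
        local suite=$1; shift
        echo "************************************"
        echo "START TEST $suite"
        time "$@"        # e.g. test/nvmf/nvmf.sh --transport=tcp
        echo "END TEST $suite"
    }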
00:05:46.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:46.674 16:30:38 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:46.674 16:30:38 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:46.674 16:30:38 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:46.674 16:30:38 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.674 16:30:38 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:46.674 16:30:38 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.674 16:30:38 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:46.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.674 --rc genhtml_branch_coverage=1 00:05:46.674 --rc genhtml_function_coverage=1 00:05:46.674 --rc genhtml_legend=1 00:05:46.674 --rc geninfo_all_blocks=1 00:05:46.674 --rc geninfo_unexecuted_blocks=1 00:05:46.674 00:05:46.674 ' 00:05:46.674 16:30:38 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:46.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.674 --rc genhtml_branch_coverage=1 00:05:46.674 --rc genhtml_function_coverage=1 00:05:46.674 --rc genhtml_legend=1 00:05:46.674 --rc geninfo_all_blocks=1 00:05:46.674 --rc geninfo_unexecuted_blocks=1 00:05:46.674 00:05:46.674 ' 00:05:46.674 16:30:38 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:46.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.674 --rc genhtml_branch_coverage=1 00:05:46.674 --rc genhtml_function_coverage=1 00:05:46.674 --rc genhtml_legend=1 00:05:46.674 --rc geninfo_all_blocks=1 00:05:46.674 --rc geninfo_unexecuted_blocks=1 00:05:46.674 00:05:46.674 ' 00:05:46.674 16:30:38 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:46.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.674 --rc genhtml_branch_coverage=1 00:05:46.674 --rc genhtml_function_coverage=1 00:05:46.674 --rc genhtml_legend=1 00:05:46.674 --rc geninfo_all_blocks=1 00:05:46.674 --rc geninfo_unexecuted_blocks=1 00:05:46.674 00:05:46.674 ' 00:05:46.674 16:30:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:46.674 16:30:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:46.674 16:30:38 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:46.674 16:30:38 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:46.674 16:30:38 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.674 16:30:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.674 ************************************ 00:05:46.674 START TEST nvmf_target_core 00:05:46.674 ************************************ 00:05:46.674 16:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:46.936 * Looking for test storage... 00:05:46.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:46.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.936 --rc genhtml_branch_coverage=1 00:05:46.936 --rc genhtml_function_coverage=1 00:05:46.936 --rc genhtml_legend=1 00:05:46.936 --rc geninfo_all_blocks=1 00:05:46.936 --rc geninfo_unexecuted_blocks=1 00:05:46.936 00:05:46.936 ' 00:05:46.936 16:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:46.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.936 --rc genhtml_branch_coverage=1 00:05:46.936 --rc genhtml_function_coverage=1 00:05:46.936 --rc genhtml_legend=1 00:05:46.936 --rc geninfo_all_blocks=1 00:05:46.936 --rc geninfo_unexecuted_blocks=1 00:05:46.936 00:05:46.937 ' 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:46.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.937 --rc genhtml_branch_coverage=1 00:05:46.937 --rc genhtml_function_coverage=1 00:05:46.937 --rc genhtml_legend=1 00:05:46.937 --rc geninfo_all_blocks=1 00:05:46.937 --rc geninfo_unexecuted_blocks=1 00:05:46.937 00:05:46.937 ' 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:46.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.937 --rc genhtml_branch_coverage=1 00:05:46.937 --rc genhtml_function_coverage=1 00:05:46.937 --rc genhtml_legend=1 00:05:46.937 --rc geninfo_all_blocks=1 00:05:46.937 --rc geninfo_unexecuted_blocks=1 00:05:46.937 00:05:46.937 ' 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:46.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:46.937 
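Note on the '[: : integer expression expected' message above: it is a latent bug in the sourced nvmf/common.sh, not a test failure. Line 33 performs a numeric test on a variable that is unset in this configuration, so the shell evaluates [ '' -eq 1 ] and [ rejects the empty operand; the script keeps going because the failed test merely returns non-zero. A defensive form would default the value before comparing — a sketch only, with SOME_FLAG as a stand-in since the trace does not reveal the variable's name:

    # hypothetical guard; SOME_FLAG and the appended arg are placeholders
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-extra-arg)
    fi

The same message recurs each time a test script re-sources nvmf/common.sh.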
************************************ 00:05:46.937 START TEST nvmf_abort 00:05:46.937 ************************************ 00:05:46.937 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:47.200 * Looking for test storage... 00:05:47.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:47.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.200 --rc genhtml_branch_coverage=1 00:05:47.200 --rc genhtml_function_coverage=1 00:05:47.200 --rc genhtml_legend=1 00:05:47.200 --rc geninfo_all_blocks=1 00:05:47.200 --rc geninfo_unexecuted_blocks=1 00:05:47.200 00:05:47.200 ' 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:47.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.200 --rc genhtml_branch_coverage=1 00:05:47.200 --rc genhtml_function_coverage=1 00:05:47.200 --rc genhtml_legend=1 00:05:47.200 --rc geninfo_all_blocks=1 00:05:47.200 --rc geninfo_unexecuted_blocks=1 00:05:47.200 00:05:47.200 ' 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:47.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.200 --rc genhtml_branch_coverage=1 00:05:47.200 --rc genhtml_function_coverage=1 00:05:47.200 --rc genhtml_legend=1 00:05:47.200 --rc geninfo_all_blocks=1 00:05:47.200 --rc geninfo_unexecuted_blocks=1 00:05:47.200 00:05:47.200 ' 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:47.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.200 --rc genhtml_branch_coverage=1 00:05:47.200 --rc genhtml_function_coverage=1 00:05:47.200 --rc genhtml_legend=1 00:05:47.200 --rc geninfo_all_blocks=1 00:05:47.200 --rc geninfo_unexecuted_blocks=1 00:05:47.200 00:05:47.200 ' 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:47.200 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:47.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
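nvmftestinit, entered above, is what gives this single host a real two-endpoint TCP topology: one port of the e810 pair becomes the target inside a private network namespace, while its link partner stays in the root namespace as the initiator. Distilled from the trace that follows (interface names and addresses taken from it), the wiring is roughly:

    # sketch of the namespace wiring nvmftestinit performs below
    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

Both directions are then verified with the pings seen further down.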
00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:47.201 16:30:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:53.787 16:30:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:53.787 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:53.787 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:53.787 16:30:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:53.787 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:53.788 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:53.788 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:53.788 16:30:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:53.788 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:54.048 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:54.048 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:54.048 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:54.048 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:54.048 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:54.048 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:54.048 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:54.309 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:54.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:54.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:05:54.309 00:05:54.309 --- 10.0.0.2 ping statistics --- 00:05:54.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:54.309 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:05:54.309 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:54.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:54.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:05:54.309 00:05:54.309 --- 10.0.0.1 ping statistics --- 00:05:54.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:54.309 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:05:54.309 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:54.309 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:05:54.309 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:54.309 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:54.309 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:54.309 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:54.309 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:54.309 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:54.309 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:54.310 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:54.310 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:54.310 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:54.310 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.310 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=2495223 00:05:54.310 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 2495223 00:05:54.310 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:54.310 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2495223 ']' 00:05:54.310 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.310 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.310 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.310 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.310 16:30:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.310 [2024-10-01 16:30:45.914885] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:05:54.310 [2024-10-01 16:30:45.914945] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:54.310 [2024-10-01 16:30:45.975707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.570 [2024-10-01 16:30:46.043736] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:54.570 [2024-10-01 16:30:46.043774] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:54.570 [2024-10-01 16:30:46.043780] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:54.570 [2024-10-01 16:30:46.043785] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:54.570 [2024-10-01 16:30:46.043789] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:54.570 [2024-10-01 16:30:46.043891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.570 [2024-10-01 16:30:46.044005] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.570 [2024-10-01 16:30:46.044023] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.570 [2024-10-01 16:30:46.168158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.570 Malloc0 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.570 Delay0 
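Worth pausing on the bdev stack just built: a 64 MiB, 4096-byte-block malloc bdev (Malloc0) wrapped in a delay bdev (Delay0) whose four latency knobs are all 1000000 µs — per the delay bdev's options these should be the average and 99th-percentile read/write latencies, i.e. roughly one second on every path. The point is to keep I/O pinned in flight long enough for the abort test to have something to cancel. rpc_cmd in the trace is effectively the standalone RPC client, so the same stack could be built as:

    # equivalent standalone RPC calls (rpc_cmd is the test wrapper for these)
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000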
00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.570 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.570 [2024-10-01 16:30:46.249243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:54.831 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.831 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:54.831 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.831 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.831 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.831 16:30:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:54.831 [2024-10-01 16:30:46.382144] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:57.369 Initializing NVMe Controllers 00:05:57.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:57.369 controller IO queue size 128 less than required 00:05:57.369 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:57.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:57.369 Initialization complete. Launching workers. 
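Reading the abort run above: the example attaches to the 10.0.0.2:4420 listener and races abort commands against its own I/O for the duration of the run. Spelled out, with the flags read per the example's conventional options:

    #   -r  transport ID of the target      -c 0x1  core mask (one core)
    #   -q 128  queue depth                 -t 1    run time in seconds
    #   -l warning  log level
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -q 128 -t 1 -l warning

With Delay0 holding each I/O for about a second, almost nothing completes normally, which is what the counters below say: 123 I/Os completed against 32336 failed (aborted); 32397 aborts submitted, of which 32340 succeeded, 57 came back unsuccessful (presumably the target no longer held the command), and 62 never made it out — the 'queue size 128 less than required' warning flags that aborts and I/O share the controller queue, so some submissions wait at the driver.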
00:05:57.369 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32336 00:05:57.369 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32397, failed to submit 62 00:05:57.369 success 32340, unsuccessful 57, failed 0 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:57.369 rmmod nvme_tcp 00:05:57.369 rmmod nvme_fabrics 00:05:57.369 rmmod nvme_keyring 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 2495223 ']' 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 2495223 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2495223 ']' 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2495223 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2495223 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2495223' 00:05:57.369 killing process with pid 2495223 00:05:57.369 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2495223 00:05:57.370 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2495223 00:05:57.370 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:05:57.370 16:30:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:05:57.370 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:05:57.370 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:57.370 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:05:57.370 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:05:57.370 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:05:57.370 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:57.370 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:57.370 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:57.370 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:57.370 16:30:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:59.278 16:30:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:59.278 00:05:59.278 real 0m12.233s 00:05:59.278 user 0m11.994s 00:05:59.278 sys 0m5.910s 00:05:59.278 16:30:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.278 16:30:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:59.278 ************************************ 00:05:59.278 END TEST nvmf_abort 00:05:59.278 ************************************ 00:05:59.278 16:30:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:59.278 16:30:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:59.278 16:30:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.278 16:30:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:59.278 ************************************ 00:05:59.278 START TEST nvmf_ns_hotplug_stress 00:05:59.278 ************************************ 00:05:59.278 16:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:59.539 * Looking for test storage... 
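The nvmf_abort epilogue above is the other half of the cycle every target test in this log repeats: nvmftestfini tears down what nvmftestinit built before the next script's preamble starts. Distilled from the trace (the namespace removal itself runs inside _remove_spdk_ns with its output silenced, so that step is inferred):

    # teardown visible in the epilogue above
    modprobe -v -r nvme-tcp                                # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    kill "$nvmfpid"                                        # stop nvmf_tgt (pid 2495223 in this run)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the test's ACCEPT rule
    _remove_spdk_ns                                        # deletes cvl_0_0_ns_spdk (inferred)
    ip -4 addr flush cvl_0_1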
00:05:59.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.539 16:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:59.539 16:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:05:59.539 16:30:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:59.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.539 --rc genhtml_branch_coverage=1 00:05:59.539 --rc genhtml_function_coverage=1 00:05:59.539 --rc genhtml_legend=1 00:05:59.539 --rc geninfo_all_blocks=1 00:05:59.539 --rc geninfo_unexecuted_blocks=1 00:05:59.539 00:05:59.539 ' 00:05:59.539 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:59.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.539 --rc genhtml_branch_coverage=1 00:05:59.540 --rc genhtml_function_coverage=1 00:05:59.540 --rc genhtml_legend=1 00:05:59.540 --rc geninfo_all_blocks=1 00:05:59.540 --rc geninfo_unexecuted_blocks=1 00:05:59.540 00:05:59.540 ' 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:59.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.540 --rc genhtml_branch_coverage=1 00:05:59.540 --rc genhtml_function_coverage=1 00:05:59.540 --rc genhtml_legend=1 00:05:59.540 --rc geninfo_all_blocks=1 00:05:59.540 --rc geninfo_unexecuted_blocks=1 00:05:59.540 00:05:59.540 ' 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:59.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.540 --rc genhtml_branch_coverage=1 00:05:59.540 --rc genhtml_function_coverage=1 00:05:59.540 --rc genhtml_legend=1 00:05:59.540 --rc geninfo_all_blocks=1 00:05:59.540 --rc geninfo_unexecuted_blocks=1 00:05:59.540 00:05:59.540 ' 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:59.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:59.540 16:30:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:07.779 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:07.779 
16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:07.779 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:07.779 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:07.780 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:07.780 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:06:07.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:07.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms
00:06:07.780
00:06:07.780 --- 10.0.0.2 ping statistics ---
00:06:07.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:07.780 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:07.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:07.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms
00:06:07.780
00:06:07.780 --- 10.0.0.1 ping statistics ---
00:06:07.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:07.780 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=2499736
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 2499736
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z
2499736 ']' 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.780 16:30:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:07.780 [2024-10-01 16:30:58.574031] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:06:07.780 [2024-10-01 16:30:58.574101] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:07.780 [2024-10-01 16:30:58.635496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.780 [2024-10-01 16:30:58.701935] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:07.780 [2024-10-01 16:30:58.701967] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:07.780 [2024-10-01 16:30:58.701980] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:07.780 [2024-10-01 16:30:58.701985] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:07.780 [2024-10-01 16:30:58.701989] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
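The nvmf_tcp_init trace above (nvmf/common.sh@250-@291) wires up the loopback topology this run tests against: the two e810 ports are split across network namespaces, with cvl_0_1 kept in the default namespace as the initiator (10.0.0.1) and cvl_0_0 moved into cvl_0_0_ns_spdk as the target (10.0.0.2); an iptables rule opens the NVMe/TCP listener port 4420, and one ping in each direction proves the path before nvmf_tgt is started. A condensed standalone sketch of those steps, with the namespace, interface names, addresses and port taken from the log (the variable names and set -e are illustrative, not from the script; run as root):

    #!/usr/bin/env bash
    set -e
    NS=cvl_0_0_ns_spdk   # target-side namespace, as in the log
    TGT_IF=cvl_0_0       # port that will host the NVMe-oF target
    INI_IF=cvl_0_1       # port left in the default namespace as initiator

    ip -4 addr flush "$TGT_IF"                 # start from clean addresses
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"          # move the target port out of the default namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                         # default ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1     # target ns -> default ns

Because the target lives in its own namespace, every target-side command in the rest of the log is prefixed with ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD array), which is why nvmf_tgt above is launched that way.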
00:06:07.780 [2024-10-01 16:30:58.702149] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.780 [2024-10-01 16:30:58.702362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.780 [2024-10-01 16:30:58.702366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.780 16:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.780 16:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:07.780 16:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:07.780 16:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:07.780 16:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:07.780 16:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:07.780 16:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:07.781 16:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:08.039 [2024-10-01 16:30:59.640571] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.039 16:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:08.298 16:30:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:08.558 [2024-10-01 16:31:00.070627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:08.558 16:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:08.817 16:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:08.817 Malloc0 00:06:09.078 16:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:09.078 Delay0 00:06:09.078 16:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.337 16:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:09.597 NULL1 00:06:09.597 16:31:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:09.856 16:31:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2500368 00:06:09.856 16:31:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:09.856 16:31:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:09.856 16:31:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.235 Read completed with error (sct=0, sc=11) 00:06:11.235 16:31:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.235 16:31:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:11.235 16:31:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:11.496 true 00:06:11.496 16:31:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:11.496 16:31:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.066 16:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.326 16:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:12.326 16:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:12.585 true 00:06:12.585 16:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:12.585 16:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.844 16:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.104 16:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:13.104 16:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:13.104 true 00:06:13.104 16:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:13.104 16:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.483 16:31:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.483 16:31:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:14.483 16:31:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:14.743 true 00:06:14.743 16:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:14.743 16:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.743 16:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.001 16:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:15.001 16:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:15.259 true 00:06:15.259 16:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:15.259 16:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.639 16:31:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.639 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:06:16.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.639 16:31:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:16.639 16:31:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:16.899 true 00:06:16.899 16:31:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:16.899 16:31:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.467 16:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.727 16:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:17.727 16:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:17.987 true 00:06:17.987 16:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:17.987 16:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.249 16:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.509 16:31:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:18.509 16:31:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:18.770 true 00:06:18.770 16:31:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:18.770 16:31:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.710 16:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.710 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:06:19.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.970 16:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:19.970 16:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:19.970 true 00:06:20.230 16:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:20.230 16:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.800 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.800 16:31:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.800 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.060 16:31:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:21.060 16:31:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:21.321 true 00:06:21.321 16:31:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:21.321 16:31:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.582 16:31:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.843 16:31:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:21.843 16:31:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:21.843 true 00:06:21.843 16:31:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:21.843 16:31:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.315 16:31:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.315 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.315 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.315 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.315 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.315 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.315 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:06:23.315 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.315 16:31:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:23.315 16:31:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:23.574 true 00:06:23.574 16:31:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:23.574 16:31:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.514 16:31:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.514 16:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:24.514 16:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:24.773 true 00:06:24.773 16:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:24.773 16:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.033 16:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.293 16:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:25.293 16:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:25.293 true 00:06:25.293 16:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:25.293 16:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.681 16:31:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.681 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:06:26.681 16:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:26.681 16:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:26.941 true 00:06:26.941 16:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:26.941 16:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.887 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.887 16:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.887 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.887 16:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:27.887 16:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:28.146 true 00:06:28.146 16:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:28.146 16:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.406 16:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.406 16:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:28.406 16:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:28.666 true 00:06:28.666 16:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:28.666 16:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.048 16:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.049 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:06:30.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.049 16:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:30.049 16:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:30.308 true 00:06:30.308 16:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:30.308 16:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.249 16:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.249 16:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:31.249 16:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:31.509 true 00:06:31.509 16:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:31.509 16:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.769 16:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.769 16:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:31.769 16:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:32.028 true 00:06:32.028 16:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:32.028 16:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.411 16:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.411 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:06:33.411 16:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:33.411 16:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:33.411 true 00:06:33.671 16:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:33.672 16:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.242 16:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.503 16:31:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:34.503 16:31:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:34.763 true 00:06:34.763 16:31:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:34.763 16:31:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.023 16:31:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.284 16:31:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:35.284 16:31:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:35.284 true 00:06:35.284 16:31:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:35.284 16:31:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.666 16:31:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.666 16:31:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:36.666 16:31:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:36.666 true 00:06:36.666 16:31:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:36.666 16:31:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.926 16:31:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.187 16:31:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:37.187 16:31:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:37.451 true 00:06:37.451 16:31:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:37.451 16:31:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.392 16:31:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.652 16:31:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:38.652 16:31:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:38.911 true 00:06:38.911 16:31:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368 00:06:38.911 16:31:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.849 16:31:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.849 16:31:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:39.849 16:31:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:40.109 Initializing NVMe Controllers 00:06:40.109 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:40.109 Controller IO queue size 128, less than required. 00:06:40.109 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:40.109 Controller IO queue size 128, less than required.
00:06:40.109 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:40.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:40.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:40.109 Initialization complete. Launching workers.
00:06:40.109 ========================================================
00:06:40.109 Latency(us)
00:06:40.109 Device Information : IOPS MiB/s Average min max
00:06:40.109 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2397.57 1.17 36692.11 2258.33 1048385.75
00:06:40.109 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 19687.28 9.61 6501.70 1452.54 369572.32
00:06:40.109 ========================================================
00:06:40.109 Total : 22084.85 10.78 9779.22 1452.54 1048385.75
00:06:40.109
00:06:40.109 true
00:06:40.109 16:31:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2500368
00:06:40.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2500368) - No such process
00:06:40.109 16:31:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2500368
00:06:40.109 16:31:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:40.368 16:31:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:40.629 16:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:40.629 16:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:40.629 16:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:40.629 16:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:40.629 16:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:40.629 null0
00:06:40.629 16:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:40.629 16:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:40.629 16:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:40.888 null1
00:06:40.888 16:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:40.888 16:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:40.888 16:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:41.149 null2
00:06:41.149 16:31:32
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:41.149 16:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:41.149 16:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:41.410 null3 00:06:41.410 16:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:41.410 16:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:41.410 16:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:41.670 null4 00:06:41.670 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:41.670 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:41.670 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:41.670 null5 00:06:41.670 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:41.670 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:41.670 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:41.930 null6 00:06:41.931 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:41.931 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:41.931 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:42.192 null7 00:06:42.192 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:42.192 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:42.192 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:42.192 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:42.192 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:42.192 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
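The records just above — bdev_null_create null0..null7 at @60, the add_remove launches at @63, and pids+=($!) at @64 — trace the fan-out phase of ns_hotplug_stress.sh. A minimal sketch of what that phase appears to look like, reconstructed only from the xtrace; $rpc and $nqn are shorthands introduced here, and the exact loop syntax is an assumption:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  nthreads=8
  pids=()
  # @59-@60: one 100 MiB null bdev with 4096-byte blocks per worker
  # (bdev_null_create <name> <total_size_mb> <block_size>).
  for (( i = 0; i < nthreads; i++ )); do
    "$rpc" bdev_null_create "null$i" 100 4096
  done
  # @62-@64: launch one backgrounded add/remove worker per bdev, namespace
  # IDs 1..8, and remember the PIDs for the wait at @66.
  for (( i = 0; i < nthreads; i++ )); do
    add_remove $(( i + 1 )) "null$i" &
    pids+=($!)
  done
  wait "${pids[@]}"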
00:06:42.192 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:42.192 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:42.192 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
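The interleaved @14-@18 records come from eight concurrent copies of the add_remove helper, each pinning one namespace ID to one null bdev. Reconstructed from the trace tags (local nsid/bdev at @14, a ten-iteration counter at @16, the add at @17, the remove at @18), reusing the $rpc and $nqn shorthands from the sketch above:

  add_remove() {
    local nsid=$1 bdev=$2              # @14: e.g. nsid=1 bdev=null0
    for (( i = 0; i < 10; i++ )); do   # @16: ten hotplug cycles per worker
      "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
      "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
    done
  }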
00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
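The earlier single-namespace phase of this log (null_size=1018 through 1027, trace tags @44-@50, ending in the "No such process" from kill at line 44) follows the same hotplug pattern with a resize added on each pass. A sketch under the assumption that 2500368 was the PID of a backgrounded I/O generator; the variable names are illustrative:

  io_pid=2500368                                  # assumed: background I/O workload
  while kill -0 "$io_pid"; do                     # @44: run until the workload exits
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1      # @45: hot-remove namespace 1
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0    # @46: hot-add it back on Delay0
    null_size=$(( null_size + 1 ))                # @49: 1018, 1019, ... in the log
    "$rpc" bdev_null_resize NULL1 "$null_size"    # @50: grow NULL1 under load
  done
  wait "$io_pid"                                  # @53: reap the finished workload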
00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2505943 2505945 2505946 2505948 2505950 2505952 2505953 2505955 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.193 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:42.453 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.453 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.453 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:42.453 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:42.453 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.453 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:42.453 16:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:42.453 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.713 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.713 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.713 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:42.713 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.713 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.714 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.974 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:43.235 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:43.235 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.235 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:43.235 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:43.235 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:43.235 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:43.235 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:43.235 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:43.496 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.496 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.496 16:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.496 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:43.757 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:43.757 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.757 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:43.757 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:43.757 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:43.757 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:43.757 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:43.757 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:43.757 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.757 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.757 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:44.017 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
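For the Latency(us) summary printed above at 00:06:40.109 when the Delay0 workload ended: the Total row is the IOPS-weighted combination of the two per-namespace rows. IOPS and MiB/s add up, min/max are taken element-wise, and the average latency is the IOPS-weighted mean:

  \text{IOPS}_{total} = 2397.57 + 19687.28 = 22084.85
  \bar{t}_{total} = \frac{2397.57 \times 36692.11 + 19687.28 \times 6501.70}{22084.85} \approx 9779.22\ \mu\text{s}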
00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.278 16:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.538 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:44.538 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:44.538 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.538 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:44.538 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:44.538 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:44.538 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:44.538 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
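While the eight workers churn namespaces 1-8, the subsystem's live namespace list can be spot-checked at any point with a standard RPC; nvmf_get_subsystems is part of rpc.py, while the jq filter here is only an illustrative assumption:

  "$rpc" nvmf_get_subsystems | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'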
00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:44.799 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.059 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.059 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.059 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.059 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.059 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.059 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:45.059 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.059 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.059 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:45.059 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.059 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.059 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:45.059 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.059 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.059 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:45.319 16:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.579 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.580 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:45.580 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.839 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:45.839 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.839 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.839 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.839 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.839 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.839 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:45.839 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.839 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.839 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:45.839 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.839 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.839 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:46.097 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.097 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.097 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:46.097 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.097 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.097 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:46.097 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.097 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.097 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:46.097 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.097 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.097 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:46.097 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.097 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.097 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:46.098 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.098 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.098 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:46.098 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:46.098 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:46.356 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.356 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:46.356 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.356 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.356 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:46.356 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:46.356 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:46.356 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:46.356 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.356 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:46.614 rmmod nvme_tcp 00:06:46.614 rmmod nvme_fabrics 00:06:46.614 rmmod nvme_keyring 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 2499736 ']' 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 2499736 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2499736 ']' 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2499736 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2499736 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2499736' 00:06:46.614 killing process with pid 
2499736 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2499736 00:06:46.614 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2499736 00:06:46.873 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:46.873 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:46.873 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:46.873 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:46.873 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:06:46.873 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:06:46.873 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:46.873 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:46.873 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:46.873 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.873 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.873 16:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.779 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:48.779 00:06:48.779 real 0m49.526s 00:06:48.779 user 3m17.949s 00:06:48.779 sys 0m15.661s 00:06:48.779 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.779 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:48.779 ************************************ 00:06:48.779 END TEST nvmf_ns_hotplug_stress 00:06:48.779 ************************************ 00:06:48.779 16:31:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:48.779 16:31:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:48.779 16:31:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.779 16:31:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:49.039 ************************************ 00:06:49.039 START TEST nvmf_delete_subsystem 00:06:49.039 ************************************ 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:49.039 * Looking for test storage... 
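The namespace churn traced above is the whole of the hotplug stress phase, and it reduces to a short loop. Below is a minimal sketch of that pattern, not the test script itself: it assumes an SPDK target already serving nqn.2016-06.io.spdk:cnode1 with null bdevs null0 through null7 created, and it uses shuf to reproduce the randomized ordering visible in the trace (the real script's shuffle mechanism is not shown in the log).

    #!/usr/bin/env bash
    # Sketch of the add/remove churn from the trace: ten rounds of attaching
    # null0..null7 as namespaces 1..8 in random order, then detaching them.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; i++)); do
        for n in $(shuf -i 1-8); do
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        for n in $(shuf -i 1-8); do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
    done

Each remove races against connected initiators that may still hold the namespace open, which is the hot-plug/hot-remove path the test name refers to.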
00:06:49.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.039 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:49.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.039 --rc genhtml_branch_coverage=1 00:06:49.039 --rc genhtml_function_coverage=1 00:06:49.039 --rc genhtml_legend=1 00:06:49.039 --rc geninfo_all_blocks=1 00:06:49.039 --rc geninfo_unexecuted_blocks=1 00:06:49.039 00:06:49.039 ' 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:49.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.040 --rc genhtml_branch_coverage=1 00:06:49.040 --rc genhtml_function_coverage=1 00:06:49.040 --rc genhtml_legend=1 00:06:49.040 --rc geninfo_all_blocks=1 00:06:49.040 --rc geninfo_unexecuted_blocks=1 00:06:49.040 00:06:49.040 ' 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:49.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.040 --rc genhtml_branch_coverage=1 00:06:49.040 --rc genhtml_function_coverage=1 00:06:49.040 --rc genhtml_legend=1 00:06:49.040 --rc geninfo_all_blocks=1 00:06:49.040 --rc geninfo_unexecuted_blocks=1 00:06:49.040 00:06:49.040 ' 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:49.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.040 --rc genhtml_branch_coverage=1 00:06:49.040 --rc genhtml_function_coverage=1 00:06:49.040 --rc genhtml_legend=1 00:06:49.040 --rc geninfo_all_blocks=1 00:06:49.040 --rc geninfo_unexecuted_blocks=1 00:06:49.040 00:06:49.040 ' 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.040 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.300 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:49.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:49.301 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:49.301 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:49.301 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:49.301 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:49.301 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:49.301 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.301 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:49.301 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:49.301 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:49.301 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.301 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.301 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.301 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:49.301 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:49.301 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:49.301 16:31:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:56.000 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:56.000 
16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:56.000 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:56.000 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:56.000 Found net devices under 0000:4b:00.1: cvl_0_1 
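The two "Found net devices under …" records above are the result of a plain sysfs walk. A minimal sketch of that discovery step, assuming the standard Linux sysfs layout and the two e810 ports reported in this run:

    #!/usr/bin/env bash
    # For each candidate PCI NIC, list the kernel net devices bound to it,
    # mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion
    # in the trace above.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $path ]] || continue    # skip ports with no bound netdev
            echo "Found net devices under $pci: ${path##*/}"
        done
    done

On this machine both ports resolve to one device each (cvl_0_0 and cvl_0_1); the script then splits them into a target interface and an initiator interface, as the trace shows next.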
00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:56.000 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:56.001 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:56.001 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:56.001 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:56.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:56.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:06:56.261 00:06:56.261 --- 10.0.0.2 ping statistics --- 00:06:56.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.261 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:56.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:56.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:06:56.261 00:06:56.261 --- 10.0.0.1 ping statistics --- 00:06:56.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.261 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:56.261 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.521 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:56.521 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=2510755 00:06:56.521 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 2510755 00:06:56.521 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2510755 ']' 00:06:56.521 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.521 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.521 16:31:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.521 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.521 16:31:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.521 [2024-10-01 16:31:47.974871] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:06:56.521 [2024-10-01 16:31:47.974911] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.521 [2024-10-01 16:31:48.051457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.521 [2024-10-01 16:31:48.113401] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:56.521 [2024-10-01 16:31:48.113436] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:56.521 [2024-10-01 16:31:48.113443] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:56.521 [2024-10-01 16:31:48.113449] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:56.521 [2024-10-01 16:31:48.113455] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:56.521 [2024-10-01 16:31:48.113551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.521 [2024-10-01 16:31:48.113556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.463 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.463 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:06:57.463 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:57.463 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:57.463 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.463 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:57.463 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:57.463 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.463 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.463 [2024-10-01 16:31:48.843939] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.463 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.463 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:57.463 16:31:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.463 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.463 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.463 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:57.463 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.463 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.464 [2024-10-01 16:31:48.860101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:57.464 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.464 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:57.464 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.464 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.464 NULL1 00:06:57.464 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.464 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:57.464 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.464 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.464 Delay0 00:06:57.464 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.464 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.464 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.464 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.464 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.464 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2511003 00:06:57.464 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:57.464 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:57.464 [2024-10-01 16:31:48.944887] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
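At this point the fixture is complete: a null bdev wrapped in a delay bdev (Delay0) is attached to the subsystem, and spdk_nvme_perf has just started a five-second, queue-depth-128 randrw run against it. What the trace shows next is the actual test: nvmf_delete_subsystem is issued two seconds in, while I/O is still queued, and every outstanding command is failed back (the long run of "completed with error (sct=0, sc=8)" records). A condensed sketch of that sequence follows; it calls rpc.py directly where the test goes through its rpc_cmd wrapper, and the backgrounding and wait are assumptions about orchestration the log does not show.

    #!/usr/bin/env bash
    # Delete-under-load: start perf against Delay0, then pull the subsystem.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rpc" bdev_null_create NULL1 1000 512
    "$rpc" bdev_delay_create -b NULL1 -d Delay0 \
           -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in us, ~1 s per op
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0

    "$perf" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
            -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    sleep 2
    "$rpc" nvmf_delete_subsystem "$nqn"   # in-flight I/O completes with errors
    wait "$perf_pid" || true              # perf is expected to report the aborts

The roughly one-second delay on every Delay0 operation keeps the 128-deep queue full when the subsystem disappears, so the error completions below are the expected outcome, not a failure of the run.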
00:06:59.376 16:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:59.376 16:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:59.376 16:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:59.638 [log trimmed: several hundred repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions, interleaved with "starting I/O failed: -6" markers, as the outstanding perf I/O is failed while the subsystem is torn down; the unique *ERROR* lines below appeared among them]
00:06:59.638 [2024-10-01 16:31:51.081979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b71930 is same with the state(6) to be set
00:06:59.639 [2024-10-01 16:31:51.082456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1b8c000c00 is same with the state(6) to be set
00:07:00.581 [2024-10-01 16:31:52.045311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b72a70 is same with the state(6) to be set
00:07:00.581 [2024-10-01 16:31:52.084437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b71390 is same with the state(6) to be set
00:07:00.581 [2024-10-01 16:31:52.084768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1b8c00cfe0 is same with the state(6) to be set
00:07:00.581 [2024-10-01 16:31:52.084877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b71750 is same with the state(6) to be set
00:07:00.582 [2024-10-01 16:31:52.084986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1b8c00d7c0 is same with the state(6) to be set
00:07:00.582 Initializing NVMe Controllers
00:07:00.582 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:00.582 Controller IO queue size 128, less than required.
00:07:00.582 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:00.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:00.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:00.582 Initialization complete. Launching workers.
00:07:00.582 ========================================================
00:07:00.582                                                                            Latency(us)
00:07:00.582 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:00.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     166.56       0.08  929725.55     359.51 2001956.29
00:07:00.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     162.59       0.08  973998.43     233.82 2001841.41
00:07:00.582 ========================================================
00:07:00.582 Total                                                                    :     329.15       0.16  951594.48     233.82 2001956.29
00:07:00.582 [2024-10-01 16:31:52.085556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b72a70 (9): Bad file descriptor
00:07:00.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:00.582 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:00.582 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:00.582 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2511003
00:07:00.582 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2511003
00:07:01.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2511003) - No such process
00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2511003
00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
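The aborted completions above are the point of this test: nvmf_delete_subsystem is issued while perf still has queue-depth-128 I/O in flight, so every outstanding command completes in error (reading sct=0/sc=8 against the NVMe generic status codes gives "command aborted due to SQ deletion"; that decoding is an annotation added here, not something the log prints). The script then polls until the perf process (pid 2511003 above) exits on its own, roughly:

  # poll for perf to exit; kill -0 only tests that the pid still exists
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 30 )) && exit 1   # give up after ~15 s of 0.5 s rounds
    sleep 0.5
  done
  NOT wait "$perf_pid"   # NOT (from common/autotest_common.sh) passes only if the command fails

The NOT helper traced next inverts wait's exit status, so the step succeeds precisely because perf exited nonzero ("errors occurred").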
00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2511003 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2511003 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.163 [2024-10-01 16:31:52.615875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2511612 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 
4 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2511612 00:07:01.163 16:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:01.163 [2024-10-01 16:31:52.697075] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:01.732 16:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:01.732 16:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2511612 00:07:01.732 16:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:01.991 16:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:01.991 16:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2511612 00:07:01.991 16:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:02.559 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:02.559 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2511612 00:07:02.559 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:03.127 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:03.127 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2511612 00:07:03.127 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:03.696 16:31:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:03.696 16:31:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2511612 00:07:03.696 16:31:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:04.265 16:31:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:04.265 16:31:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2511612 00:07:04.265 16:31:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:04.265 Initializing NVMe Controllers 00:07:04.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:04.265 Controller IO queue size 128, less than required. 00:07:04.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:04.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:04.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:04.265 Initialization complete. Launching workers. 
00:07:04.265 ======================================================== 00:07:04.265 Latency(us) 00:07:04.265 Device Information : IOPS MiB/s Average min max 00:07:04.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003350.13 1000202.96 1009980.38 00:07:04.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003203.21 1000177.63 1040818.73 00:07:04.265 ======================================================== 00:07:04.265 Total : 256.00 0.12 1003276.67 1000177.63 1040818.73 00:07:04.265 00:07:04.525 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:04.525 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2511612 00:07:04.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2511612) - No such process 00:07:04.525 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2511612 00:07:04.525 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:04.525 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:04.525 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:04.525 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:04.525 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:04.525 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:04.525 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:04.525 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:04.525 rmmod nvme_tcp 00:07:04.525 rmmod nvme_fabrics 00:07:04.784 rmmod nvme_keyring 00:07:04.784 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:04.784 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:04.784 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:04.784 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 2510755 ']' 00:07:04.784 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 2510755 00:07:04.784 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2510755 ']' 00:07:04.784 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2510755 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2510755 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2510755' 00:07:04.785 killing process with pid 2510755 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2510755 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2510755 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:04.785 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:07.328 00:07:07.328 real 0m18.019s 00:07:07.328 user 0m30.712s 00:07:07.328 sys 0m6.541s 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.328 ************************************ 00:07:07.328 END TEST nvmf_delete_subsystem 00:07:07.328 ************************************ 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:07.328 ************************************ 00:07:07.328 START TEST nvmf_host_management 00:07:07.328 ************************************ 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:07.328 * Looking for test storage... 
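Before host_management proceeds, the nvmftestfini trace that closed the previous test amounts, in outline, to the following; pids and interface names are the ones from this run, and the namespace teardown inside _remove_spdk_ns is an assumption, since the trace does not expand it:

  sync
  modprobe -v -r nvme-tcp       # emits the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  kill 2510755                  # killprocess: stop the nvmf_tgt app (process_name=reactor_0)
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop SPDK_NVMF rules
  ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1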
00:07:07.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:07.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.328 --rc genhtml_branch_coverage=1 00:07:07.328 --rc genhtml_function_coverage=1 00:07:07.328 --rc genhtml_legend=1 00:07:07.328 --rc geninfo_all_blocks=1 00:07:07.328 --rc geninfo_unexecuted_blocks=1 00:07:07.328 00:07:07.328 ' 00:07:07.328 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:07.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.328 --rc genhtml_branch_coverage=1 00:07:07.328 --rc genhtml_function_coverage=1 00:07:07.328 --rc genhtml_legend=1 00:07:07.328 --rc geninfo_all_blocks=1 00:07:07.328 --rc geninfo_unexecuted_blocks=1 00:07:07.328 00:07:07.328 ' 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:07.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.329 --rc genhtml_branch_coverage=1 00:07:07.329 --rc genhtml_function_coverage=1 00:07:07.329 --rc genhtml_legend=1 00:07:07.329 --rc geninfo_all_blocks=1 00:07:07.329 --rc geninfo_unexecuted_blocks=1 00:07:07.329 00:07:07.329 ' 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:07.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.329 --rc genhtml_branch_coverage=1 00:07:07.329 --rc genhtml_function_coverage=1 00:07:07.329 --rc genhtml_legend=1 00:07:07.329 --rc geninfo_all_blocks=1 00:07:07.329 --rc geninfo_unexecuted_blocks=1 00:07:07.329 00:07:07.329 ' 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:07.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:07.329 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:15.463 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:15.463 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:15.463 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:15.464 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.464 16:32:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:15.464 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:15.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:07:15.464 00:07:15.464 --- 10.0.0.2 ping statistics --- 00:07:15.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.464 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:15.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:07:15.464 00:07:15.464 --- 10.0.0.1 ping statistics --- 00:07:15.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.464 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=2516179 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 2516179 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2516179 ']' 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
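The nvmf_tcp_init sequence traced above (common.sh@250-291) is what lets a single machine act as both initiator and target without cheating through loopback: one E810 port is moved into a private network namespace, so NVMe/TCP traffic between the two sides has to cross the physical link. A minimal sketch of the same topology, assuming the cvl_0_0/cvl_0_1 interface names from this run:

    # target port lives in its own namespace; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port toward the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # reachability check, mirrored by the pings logged above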
00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.464 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:15.464 [2024-10-01 16:32:06.032407] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:07:15.464 [2024-10-01 16:32:06.032455] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.464 [2024-10-01 16:32:06.088831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.464 [2024-10-01 16:32:06.148922] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.464 [2024-10-01 16:32:06.148951] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.464 [2024-10-01 16:32:06.148958] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.464 [2024-10-01 16:32:06.148963] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.464 [2024-10-01 16:32:06.148967] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
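While the target boots (the DPDK EAL and trace notices above), waitforlisten, entered above with pid 2516179 and max_retries=100, simply polls until something answers on /var/tmp/spdk.sock. A rough stand-in for that wait, assuming SPDK's scripts/rpc.py is available and using rpc_get_methods as the probe (the real helper also rechecks that the pid is alive on each pass):

    # block until the SPDK app is servicing RPCs, or give up after 100 tries
    for _ in $(seq 1 100); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        kill -0 2516179 2>/dev/null || { echo 'target died during startup'; exit 1; }
        sleep 0.1
    done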
00:07:15.464 [2024-10-01 16:32:06.149110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.464 [2024-10-01 16:32:06.149290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.464 [2024-10-01 16:32:06.149419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:15.464 [2024-10-01 16:32:06.149420] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.464 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.464 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:15.464 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:15.464 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.464 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.464 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.464 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:15.464 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.464 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.465 [2024-10-01 16:32:06.293924] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.465 Malloc0 00:07:15.465 [2024-10-01 16:32:06.352587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2516366 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2516366 /var/tmp/bdevperf.sock 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2516366 ']' 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:15.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:15.465 { 00:07:15.465 "params": { 00:07:15.465 "name": "Nvme$subsystem", 00:07:15.465 "trtype": "$TEST_TRANSPORT", 00:07:15.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:15.465 "adrfam": "ipv4", 00:07:15.465 "trsvcid": "$NVMF_PORT", 00:07:15.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:15.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:15.465 "hdgst": ${hdgst:-false}, 00:07:15.465 "ddgst": ${ddgst:-false} 00:07:15.465 }, 00:07:15.465 "method": "bdev_nvme_attach_controller" 00:07:15.465 } 00:07:15.465 EOF 00:07:15.465 )") 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:15.465 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:15.465 "params": { 00:07:15.465 "name": "Nvme0", 00:07:15.465 "trtype": "tcp", 00:07:15.465 "traddr": "10.0.0.2", 00:07:15.465 "adrfam": "ipv4", 00:07:15.465 "trsvcid": "4420", 00:07:15.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:15.465 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:15.465 "hdgst": false, 00:07:15.465 "ddgst": false 00:07:15.465 }, 00:07:15.465 "method": "bdev_nvme_attach_controller" 00:07:15.465 }' 00:07:15.465 [2024-10-01 16:32:06.465032] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
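The gen_nvmf_target_json heredoc traced above is how bdevperf gets its one NVMe-oF controller without a live RPC server: the generated object is handed to it on /dev/fd/63 as a JSON config. Written to a real file it would look roughly like the sketch below; the outer subsystems/bdev wrapper is the standard SPDK JSON-config shape and is assumed here, only the inner attach entry appears verbatim in the trace:

    # hypothetical path; the test pipes this via process substitution instead
    cat > /tmp/bdevperf_nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
        -q 64 -o 65536 -w verify -t 10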
00:07:15.465 [2024-10-01 16:32:06.465081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2516366 ] 00:07:15.465 [2024-10-01 16:32:06.541262] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.465 [2024-10-01 16:32:06.602657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.465 Running I/O for 10 seconds... 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.725 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.989 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:07:15.989 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:07:15.989 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:15.989 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:15.989 16:32:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:15.989 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:15.989 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.989 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.989 [2024-10-01 16:32:07.431860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc88e30 is same with the state(6) to be set [the identical recv-state message repeats several dozen more times, timestamps 16:32:07.431901 through 16:32:07.432196]
00:07:15.990 [2024-10-01 16:32:07.432200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc88e30 is same with the state(6) to be set 00:07:15.990 [2024-10-01 16:32:07.432752] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:07:15.990 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.990 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:15.990 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.990 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.990 [2024-10-01 16:32:07.438578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.990 [2024-10-01 16:32:07.438595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.438605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.990 [2024-10-01 16:32:07.438612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.438625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.990 [2024-10-01 16:32:07.438632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.438640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:15.990 [2024-10-01 16:32:07.438647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.438654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7fd60 is same with the state(6) to be set 00:07:15.990 [2024-10-01 16:32:07.448592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe7fd60 (9): Bad file descriptor 00:07:15.990 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.990 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:15.990 [2024-10-01 16:32:07.458648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.458984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.458993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.990 [2024-10-01 16:32:07.459387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.990 [2024-10-01 16:32:07.459396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.991 [2024-10-01 16:32:07.459405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.991 [2024-10-01 16:32:07.459412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.991 [2024-10-01 16:32:07.459420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.991 [2024-10-01 16:32:07.459427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.991 [2024-10-01 16:32:07.459437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.991 [2024-10-01 16:32:07.459444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.991 [2024-10-01 16:32:07.459452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.991 [2024-10-01 16:32:07.459459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.991 [2024-10-01 16:32:07.459468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.991 [2024-10-01 16:32:07.459475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.991 [2024-10-01 16:32:07.459484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.991 [2024-10-01 16:32:07.459490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.991 [2024-10-01 16:32:07.459498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.991 [2024-10-01 16:32:07.459505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.991 [2024-10-01 16:32:07.459514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.991 [2024-10-01 16:32:07.459521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.991 [2024-10-01 16:32:07.459529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.991 [2024-10-01 16:32:07.459536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.991 [2024-10-01 16:32:07.459544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.991 [2024-10-01 16:32:07.459551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.991 [2024-10-01 16:32:07.459560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.991 [2024-10-01 16:32:07.459567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.991 [2024-10-01 16:32:07.459576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.991 [2024-10-01 16:32:07.459582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.991 [2024-10-01 16:32:07.459592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.991 [2024-10-01 16:32:07.459599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.991 [2024-10-01 16:32:07.459608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.991 [2024-10-01 16:32:07.459614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.991 [2024-10-01 16:32:07.459623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.991 [2024-10-01 16:32:07.459630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.991 [2024-10-01 16:32:07.459638] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.991 [2024-10-01 16:32:07.459645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.991 [2024-10-01 16:32:07.459654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.991 [2024-10-01 16:32:07.459660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.991 [2024-10-01 16:32:07.459668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1098140 is same with the state(6) to be set 00:07:15.991 task offset: 122880 on job bdev=Nvme0n1 fails 00:07:15.991 00:07:15.991 Latency(us) 00:07:15.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.991 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:15.991 Job: Nvme0n1 ended in about 0.67 seconds with error 00:07:15.991 Verification LBA range: start 0x0 length 0x400 00:07:15.991 Nvme0n1 : 0.67 1432.87 89.55 95.52 0.00 41042.66 5973.86 49202.41 00:07:15.991 =================================================================================================================== 00:07:15.991 Total : 1432.87 89.55 95.52 0.00 41042.66 5973.86 49202.41 00:07:15.991 [2024-10-01 16:32:07.462663] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:15.991 [2024-10-01 16:32:07.462685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:15.991 [2024-10-01 16:32:07.513275] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
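Everything from the recv-state churn through the ABORTED - SQ DELETION (00/08) completions above is the fault this test exists to inject: host_management.sh@84 revoked the initiator's access with nvmf_subsystem_remove_host while verify I/O was in flight, the target deleted the submission queues, bdevperf failed the job ('task offset: 122880 on job bdev=Nvme0n1 fails'), and @85 re-added the host so that the controller reset in the last line could reconnect. The same round trip by hand, assuming rpc.py is pointed at the target's /var/tmp/spdk.sock:

    # revoke the host ACL mid-I/O, then restore it so the host can reconnect
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1    # outstanding commands complete as ABORTED - SQ DELETION
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0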
00:07:16.933 16:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2516366 00:07:16.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2516366) - No such process 00:07:16.933 16:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:16.933 16:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:16.933 16:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:16.933 16:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:16.933 16:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:16.933 16:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:16.933 16:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:16.933 16:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:16.933 { 00:07:16.933 "params": { 00:07:16.933 "name": "Nvme$subsystem", 00:07:16.933 "trtype": "$TEST_TRANSPORT", 00:07:16.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:16.933 "adrfam": "ipv4", 00:07:16.933 "trsvcid": "$NVMF_PORT", 00:07:16.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:16.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:16.933 "hdgst": ${hdgst:-false}, 00:07:16.933 "ddgst": ${ddgst:-false} 00:07:16.933 }, 00:07:16.933 "method": "bdev_nvme_attach_controller" 00:07:16.933 } 00:07:16.933 EOF 00:07:16.933 )") 00:07:16.933 16:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:16.933 16:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:16.933 16:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:16.933 16:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:16.933 "params": { 00:07:16.933 "name": "Nvme0", 00:07:16.933 "trtype": "tcp", 00:07:16.933 "traddr": "10.0.0.2", 00:07:16.933 "adrfam": "ipv4", 00:07:16.933 "trsvcid": "4420", 00:07:16.933 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:16.933 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:16.933 "hdgst": false, 00:07:16.933 "ddgst": false 00:07:16.933 }, 00:07:16.933 "method": "bdev_nvme_attach_controller" 00:07:16.933 }' 00:07:16.933 [2024-10-01 16:32:08.507425] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:07:16.933 [2024-10-01 16:32:08.507476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2516818 ] 00:07:16.933 [2024-10-01 16:32:08.583658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.194 [2024-10-01 16:32:08.646241] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.194 Running I/O for 1 seconds... 
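[editor's note] The retry above rebuilds the bdevperf configuration on the fly: gen_nvmf_target_json fills a per-subsystem heredoc (digests default off via ${hdgst:-false}), renders it with jq, and bdevperf reads the result from a file-descriptor path, /dev/fd/62 in this trace. A self-contained bash sketch of the same idea is below; the outer "subsystems"/"bdev" envelope is an assumption from SPDK's JSON-config format, since the trace only prints the inner attach object.

# Sketch: build the attach-controller config and hand it to bdevperf on an anonymous fd.
gen_config() {
  cat <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0", "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" } ] } ] }
EOF
}
# process substitution yields a /dev/fd/NN path, like the /dev/fd/62 seen above
./build/examples/bdevperf --json <(gen_config) -q 64 -o 65536 -w verify -t 1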
00:07:18.578 1600.00 IOPS, 100.00 MiB/s 00:07:18.578 Latency(us) 00:07:18.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:18.578 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:18.578 Verification LBA range: start 0x0 length 0x400 00:07:18.578 Nvme0n1 : 1.02 1635.09 102.19 0.00 0.00 38513.53 7208.96 31658.93 00:07:18.578 =================================================================================================================== 00:07:18.578 Total : 1635.09 102.19 0.00 0.00 38513.53 7208.96 31658.93 00:07:18.578 16:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:18.578 16:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:18.578 16:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:18.578 16:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:18.578 16:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:18.578 16:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:18.578 16:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:18.578 16:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:18.578 16:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:18.578 16:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:18.578 16:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:18.578 rmmod nvme_tcp 00:07:18.578 rmmod nvme_fabrics 00:07:18.578 rmmod nvme_keyring 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 2516179 ']' 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 2516179 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2516179 ']' 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2516179 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2516179 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2516179' 00:07:18.578 killing process with pid 2516179 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2516179 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2516179 00:07:18.578 [2024-10-01 16:32:10.233394] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:18.578 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:07:18.840 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:18.840 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:18.840 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.840 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.840 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.753 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:20.753 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:20.753 00:07:20.753 real 0m13.741s 00:07:20.753 user 0m21.171s 00:07:20.753 sys 0m6.314s 00:07:20.753 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.753 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.753 ************************************ 00:07:20.753 END TEST nvmf_host_management 00:07:20.753 ************************************ 00:07:20.753 16:32:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:20.753 16:32:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:20.753 16:32:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.753 16:32:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:20.753 ************************************ 00:07:20.753 START TEST nvmf_lvol 00:07:20.753 ************************************ 00:07:20.753 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
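[editor's note] Before nvmf_lvol begins, note that host_management ended the way every suite here does: stoptarget removes the state files, nvmftestfini unloads nvme-tcp/nvme-fabrics/nvme-keyring, and killprocess sanity-checks the pid before signalling it. A simplified bash reconstruction of killprocess from the xtrace above (the sudo special case is elided; autotest_common.sh line references in comments):

# Sketch: killprocess as traced above (simplified reconstruction, not the exact source).
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1                           # @950: refuse an empty pid
  kill -0 "$pid" || return 0                          # @954: nothing left to kill
  local process_name=
  [ "$(uname)" = Linux ] &&                           # @955
    process_name=$(ps --no-headers -o comm= "$pid")   # @956: reactor_0/reactor_1 here
  if [ "$process_name" != sudo ]; then                # @960: never SIGTERM sudo itself
    echo "killing process with pid $pid"              # @968
    kill "$pid"                                       # @969
  fi
  wait "$pid" || true                                 # @974: reap, ignore the exit code
}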
00:07:21.015 * Looking for test storage... 00:07:21.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:21.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.015 --rc genhtml_branch_coverage=1 00:07:21.015 --rc genhtml_function_coverage=1 00:07:21.015 --rc genhtml_legend=1 00:07:21.015 --rc geninfo_all_blocks=1 00:07:21.015 --rc geninfo_unexecuted_blocks=1 00:07:21.015 00:07:21.015 ' 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:21.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.015 --rc genhtml_branch_coverage=1 00:07:21.015 --rc genhtml_function_coverage=1 00:07:21.015 --rc genhtml_legend=1 00:07:21.015 --rc geninfo_all_blocks=1 00:07:21.015 --rc geninfo_unexecuted_blocks=1 00:07:21.015 00:07:21.015 ' 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:21.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.015 --rc genhtml_branch_coverage=1 00:07:21.015 --rc genhtml_function_coverage=1 00:07:21.015 --rc genhtml_legend=1 00:07:21.015 --rc geninfo_all_blocks=1 00:07:21.015 --rc geninfo_unexecuted_blocks=1 00:07:21.015 00:07:21.015 ' 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:21.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.015 --rc genhtml_branch_coverage=1 00:07:21.015 --rc genhtml_function_coverage=1 00:07:21.015 --rc genhtml_legend=1 00:07:21.015 --rc geninfo_all_blocks=1 00:07:21.015 --rc geninfo_unexecuted_blocks=1 00:07:21.015 00:07:21.015 ' 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
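[editor's note] The storage probe above pushes 'lcov --version' through the version helpers in scripts/common.sh, and the xtrace spells out the whole algorithm: split both version strings on '.', '-' or ':' into arrays, then walk the components and compare them numerically. A condensed bash sketch of that comparison follows; the real helper also routes each component through a decimal() validator and a case on the operator, both elided here.

# Sketch: the version comparison seen in the scripts/common.sh xtrace above (condensed).
lt() { cmp_versions "$1" '<' "$2"; }   # "lt 1.15 2" as in the trace
cmp_versions() {
  local ver1 ver2 ver1_l ver2_l op=$2 v
  IFS=.-: read -ra ver1 <<< "$1"       # @336: split "1.15" -> (1 15)
  IFS=.-: read -ra ver2 <<< "$3"       # @337
  ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
  for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
    if (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
    if (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
  done
  [[ $op == '=' || $op == '<=' || $op == '>=' ]]     # all components equal
}
lt 1.15 2 && echo "lcov 1.15 predates 2: enable the branch/function coverage flags"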
00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.015 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:21.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:21.016 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.157 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.157 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:29.157 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:29.157 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:29.157 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:29.157 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:29.157 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:29.157 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:29.157 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:29.157 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:29.157 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:29.157 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:29.157 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:29.158 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:29.158 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:29.158 16:32:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:29.158 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:29.158 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:29.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:07:29.158 00:07:29.158 --- 10.0.0.2 ping statistics --- 00:07:29.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.158 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:29.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:29.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:07:29.158 00:07:29.158 --- 10.0.0.1 ping statistics --- 00:07:29.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.158 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=2521058 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 2521058 00:07:29.158 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:29.159 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2521058 ']' 00:07:29.159 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.159 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.159 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.159 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.159 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.159 [2024-10-01 16:32:20.027852] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
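[editor's note] The bring-up above splits the two e810 ports into a point-to-point pair: cvl_0_0 moves into a private cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1); the firewall opening is tagged with an SPDK_NVMF comment so teardown can find it, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. The same wiring, condensed (commands exactly as they appear in the trace):

# Sketch: target-in-namespace wiring as performed above (condensed from the trace).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7  # as launched above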
00:07:29.159 [2024-10-01 16:32:20.027920] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.159 [2024-10-01 16:32:20.115855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.159 [2024-10-01 16:32:20.204928] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.159 [2024-10-01 16:32:20.204987] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.159 [2024-10-01 16:32:20.204996] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.159 [2024-10-01 16:32:20.205002] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.159 [2024-10-01 16:32:20.205008] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.159 [2024-10-01 16:32:20.205098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.159 [2024-10-01 16:32:20.205237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.159 [2024-10-01 16:32:20.205240] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.418 16:32:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.418 16:32:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:29.418 16:32:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:29.418 16:32:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:29.418 16:32:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.418 16:32:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.418 16:32:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:29.678 [2024-10-01 16:32:21.153261] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.678 16:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:29.938 16:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:29.938 16:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:30.198 16:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:30.198 16:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:30.198 16:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:30.458 16:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=916d7b8f-25e7-469e-b370-aaa59b13bda3 00:07:30.458 16:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 916d7b8f-25e7-469e-b370-aaa59b13bda3 lvol 20 00:07:30.718 16:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ac9f95fc-9b46-4673-ba01-3941cee625e8 00:07:30.718 16:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:30.978 16:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ac9f95fc-9b46-4673-ba01-3941cee625e8 00:07:31.239 16:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:31.239 [2024-10-01 16:32:22.906820] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.501 16:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:31.501 16:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2521637 00:07:31.501 16:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:31.501 16:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:32.881 16:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ac9f95fc-9b46-4673-ba01-3941cee625e8 MY_SNAPSHOT 00:07:32.881 16:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d423bbf6-697a-4a0f-8fa8-408cd6deb11a 00:07:32.881 16:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ac9f95fc-9b46-4673-ba01-3941cee625e8 30 00:07:33.140 16:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d423bbf6-697a-4a0f-8fa8-408cd6deb11a MY_CLONE 00:07:33.399 16:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=73acc694-dff9-4a89-a7a2-ecc41d106264 00:07:33.400 16:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 73acc694-dff9-4a89-a7a2-ecc41d106264 00:07:33.967 16:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2521637 00:07:42.097 Initializing NVMe Controllers 00:07:42.097 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:42.097 Controller IO queue size 128, less than required. 00:07:42.097 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
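[editor's note] The RPC sequence above builds the data path end to end: two 64 MB malloc bdevs (512-byte blocks) striped into raid0, an lvstore on top, and an lvol of size 20 exported as namespace 1 of cnode0; then, while spdk_nvme_perf runs randwrite at queue depth 128 against it over TCP, the live lvol is snapshotted, resized to 30, the read-only snapshot cloned, and the clone inflated into a full copy (the perf results follow below). A condensed replay of those rpc.py calls, with the UUIDs captured in this run (the 20/30 arguments are sizes, in MiB on recent SPDK):

# Sketch: the lvol lifecycle exercised above, via scripts/rpc.py (UUIDs from this run).
rpc=scripts/rpc.py
$rpc bdev_malloc_create 64 512                       # -> Malloc0
$rpc bdev_malloc_create 64 512                       # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
$rpc bdev_lvol_create_lvstore raid0 lvs              # -> 916d7b8f-25e7-469e-b370-aaa59b13bda3
$rpc bdev_lvol_create -u 916d7b8f-25e7-469e-b370-aaa59b13bda3 lvol 20
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ac9f95fc-9b46-4673-ba01-3941cee625e8
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# ... with spdk_nvme_perf still writing:
$rpc bdev_lvol_snapshot ac9f95fc-9b46-4673-ba01-3941cee625e8 MY_SNAPSHOT
$rpc bdev_lvol_resize ac9f95fc-9b46-4673-ba01-3941cee625e8 30
$rpc bdev_lvol_clone d423bbf6-697a-4a0f-8fa8-408cd6deb11a MY_CLONE
$rpc bdev_lvol_inflate 73acc694-dff9-4a89-a7a2-ecc41d106264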
00:07:42.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:42.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:42.097 Initialization complete. Launching workers. 00:07:42.098 ======================================================== 00:07:42.098 Latency(us) 00:07:42.098 Device Information : IOPS MiB/s Average min max 00:07:42.098 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16861.80 65.87 7595.25 376.85 52145.97 00:07:42.098 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 13188.90 51.52 9708.37 947.50 37903.37 00:07:42.098 ======================================================== 00:07:42.098 Total : 30050.70 117.39 8522.67 376.85 52145.97 00:07:42.098 00:07:42.098 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:42.098 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ac9f95fc-9b46-4673-ba01-3941cee625e8 00:07:42.359 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 916d7b8f-25e7-469e-b370-aaa59b13bda3 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:42.620 rmmod nvme_tcp 00:07:42.620 rmmod nvme_fabrics 00:07:42.620 rmmod nvme_keyring 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 2521058 ']' 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 2521058 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2521058 ']' 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2521058 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2521058 00:07:42.620 16:32:34 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2521058' 00:07:42.620 killing process with pid 2521058 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2521058 00:07:42.620 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2521058 00:07:42.880 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:42.880 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:42.880 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:42.880 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:42.880 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:07:42.880 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:42.880 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:07:42.880 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:42.880 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:42.880 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.880 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.880 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:45.425 00:07:45.425 real 0m24.101s 00:07:45.425 user 1m6.310s 00:07:45.425 sys 0m8.446s 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:45.425 ************************************ 00:07:45.425 END TEST nvmf_lvol 00:07:45.425 ************************************ 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.425 ************************************ 00:07:45.425 START TEST nvmf_lvs_grow 00:07:45.425 ************************************ 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:45.425 * Looking for test storage... 
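[editor's note] Before nvmf_lvs_grow repeats the same preamble, one detail from the teardown above is worth a note: the iptables handling is deliberately surgical. Setup inserted its ACCEPT rule with a comment embedding the full rule text behind an SPDK_NVMF: prefix, so nvmftestfini can delete exactly what it added by filtering the saved ruleset, whatever else changed in between. A sketch of that tag-and-sweep pair; the function bodies are reconstructed from the trace, not copied from nvmf/common.sh.

# Sketch: tag-and-sweep iptables cleanup as traced above.
ipts() {  # add a rule plus a self-describing SPDK_NVMF comment (seen at nvmf/common.sh@788)
  iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
iptr() {  # drop every SPDK_NVMF-tagged rule, keep everything else (nvmf/common.sh@789)
  iptables-save | grep -v SPDK_NVMF | iptables-restore
}
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # setup, as at nvmf/common.sh@287
iptr                                                       # teardown, as in nvmftestfini above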
00:07:45.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:45.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.425 --rc genhtml_branch_coverage=1 00:07:45.425 --rc genhtml_function_coverage=1 00:07:45.425 --rc genhtml_legend=1 00:07:45.425 --rc geninfo_all_blocks=1 00:07:45.425 --rc geninfo_unexecuted_blocks=1 00:07:45.425 00:07:45.425 ' 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:45.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.425 --rc genhtml_branch_coverage=1 00:07:45.425 --rc genhtml_function_coverage=1 00:07:45.425 --rc genhtml_legend=1 00:07:45.425 --rc geninfo_all_blocks=1 00:07:45.425 --rc geninfo_unexecuted_blocks=1 00:07:45.425 00:07:45.425 ' 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:45.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.425 --rc genhtml_branch_coverage=1 00:07:45.425 --rc genhtml_function_coverage=1 00:07:45.425 --rc genhtml_legend=1 00:07:45.425 --rc geninfo_all_blocks=1 00:07:45.425 --rc geninfo_unexecuted_blocks=1 00:07:45.425 00:07:45.425 ' 00:07:45.425 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:45.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.425 --rc genhtml_branch_coverage=1 00:07:45.425 --rc genhtml_function_coverage=1 00:07:45.425 --rc genhtml_legend=1 00:07:45.426 --rc geninfo_all_blocks=1 00:07:45.426 --rc geninfo_unexecuted_blocks=1 00:07:45.426 00:07:45.426 ' 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:45.426 16:32:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:45.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
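
One thing worth noticing in the PATH exports above: paths/export.sh prepends the go, protoc and golangci directories unconditionally every time it is sourced, so PATH accumulates the same entries many times over. Harmless, but the usual guard looks like the sketch below (not what the script does today; pathmunge is a hypothetical helper name):

    pathmunge() {                 # prepend $1 to PATH only if it is absent
      case ":$PATH:" in
        *":$1:"*) ;;              # already present: leave PATH untouched
        *) PATH="$1:$PATH" ;;
      esac
    }
    pathmunge /opt/golangci/1.54.2/bin
    pathmunge /opt/protoc/21.7/bin
    pathmunge /opt/go/1.21.1/bin
    export PATH
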
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:45.426 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.570 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.570 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:53.570 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:53.570 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:53.570 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:53.570 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:53.570 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:53.570 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:53.570 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:53.571 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:53.571 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.571 16:32:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:53.571 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:53.571 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
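
The device scan above works in two steps: match PCI functions against known vendor:device pairs (0x8086:0x159b is an Intel E810-family part, bound here to the ice driver), then resolve each matching PCI address to its kernel interface through /sys/bus/pci/devices/$pci/net/, exactly the glob visible in the trace. A rough standalone equivalent, with the device IDs taken from the log and error handling omitted:

    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $path ]] || continue
        dev=${path##*/}                          # e.g. cvl_0_0
        echo "Found net device under $pci: $dev ($(cat "$path/operstate"))"
      done
    done
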
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:53.571 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.571 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.571 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.571 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:53.571 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:53.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:07:53.571 00:07:53.571 --- 10.0.0.2 ping statistics --- 00:07:53.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.571 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:07:53.571 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:53.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:07:53.571 00:07:53.571 --- 10.0.0.1 ping statistics --- 00:07:53.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.571 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:07:53.571 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.571 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:07:53.571 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:53.571 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.571 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:53.571 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:53.571 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.571 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:53.571 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:53.571 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:53.571 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:53.571 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:53.572 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.572 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=2527463 00:07:53.572 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 2527463 00:07:53.572 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:53.572 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2527463 ']' 00:07:53.572 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.572 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.572 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.572 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.572 16:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.572 [2024-10-01 16:32:44.230652] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
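
Condensing the nvmf_tcp_init sequence above: the two E810 ports are cabled back to back, so the test isolates one of them in a network namespace and lets a single host play both NVMe/TCP target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, in the root namespace). A replay of the commands the trace just ran, with the two pings as the smoke test:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> root ns
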
00:07:53.572 [2024-10-01 16:32:44.230716] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.572 [2024-10-01 16:32:44.315261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.572 [2024-10-01 16:32:44.406475] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.572 [2024-10-01 16:32:44.406529] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.572 [2024-10-01 16:32:44.406538] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.572 [2024-10-01 16:32:44.406544] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.572 [2024-10-01 16:32:44.406550] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.572 [2024-10-01 16:32:44.406574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.572 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.572 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:53.572 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:53.572 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:53.572 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.572 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.572 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:53.832 [2024-10-01 16:32:45.382994] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.832 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:53.832 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:53.832 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.832 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.832 ************************************ 00:07:53.832 START TEST lvs_grow_clean 00:07:53.832 ************************************ 00:07:53.832 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:53.832 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:53.832 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:53.832 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:53.832 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:53.832 16:32:45 
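
nvmfappstart then launches nvmf_tgt inside that namespace and blocks until its JSON-RPC socket answers before creating the TCP transport. A condensed sketch of the bring-up; the polling loop is a stand-in for the real waitforlisten helper, which additionally watches the PID:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    until ./scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do
      sleep 0.5                      # wait for /var/tmp/spdk.sock to accept RPCs
    done
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # flags copied from the trace
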
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:53.832 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:53.832 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:53.832 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:53.833 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:54.092 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:54.092 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:54.353 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=bf5ba487-eb25-4e5c-96e4-ef90460fc8fd 00:07:54.353 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf5ba487-eb25-4e5c-96e4-ef90460fc8fd 00:07:54.353 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:54.614 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:54.614 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:54.614 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bf5ba487-eb25-4e5c-96e4-ef90460fc8fd lvol 150 00:07:54.875 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3b42053e-e5dc-4e64-8cc7-1e3f0f30561a 00:07:54.875 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:54.875 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:55.134 [2024-10-01 16:32:46.613150] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:55.134 [2024-10-01 16:32:46.613217] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:55.134 true 00:07:55.135 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
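
The clean variant builds its storage stack from a plain file: a 200 MiB file becomes an AIO bdev with 4 KiB blocks, and on top of that an lvstore with 4 MiB (4194304-byte) clusters. 200 MiB / 4 MiB = 50 clusters, and the trace reports total_data_clusters=49, consistent with one cluster's worth being reserved for lvstore metadata. Condensed replay (aio_file is a stand-in for the test/nvmf/target/aio_bdev path used above):

    truncate -s 200M aio_file
    ./scripts/rpc.py bdev_aio_create "$PWD/aio_file" aio_bdev 4096
    lvs=$(./scripts/rpc.py bdev_lvol_create_lvstore \
            --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" \
      | jq -r '.[0].total_data_clusters'                       # expect 49
    lvol=$(./scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB lvol
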
bf5ba487-eb25-4e5c-96e4-ef90460fc8fd 00:07:55.135 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:55.405 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:55.405 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:55.405 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3b42053e-e5dc-4e64-8cc7-1e3f0f30561a 00:07:55.694 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:55.974 [2024-10-01 16:32:47.499848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.974 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:56.234 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2528127 00:07:56.234 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:56.234 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:56.234 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2528127 /var/tmp/bdevperf.sock 00:07:56.234 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2528127 ']' 00:07:56.234 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:56.234 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.234 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:56.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:56.234 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.234 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:56.234 [2024-10-01 16:32:47.804104] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
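
With the lvol in place, the trace exports it over NVMe/TCP and attaches bdevperf to it from the initiator side. The -z flag keeps bdevperf idle until the perform_tests RPC arrives (sent a little further down via bdevperf.py), and -S 1 produces the per-second throughput lines that follow. Condensed replay:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
        -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
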
00:07:56.234 [2024-10-01 16:32:47.804172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528127 ] 00:07:56.234 [2024-10-01 16:32:47.859547] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.495 [2024-10-01 16:32:47.925908] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.495 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.495 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:56.495 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:56.756 Nvme0n1 00:07:56.756 16:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:57.018 [ 00:07:57.018 { 00:07:57.018 "name": "Nvme0n1", 00:07:57.018 "aliases": [ 00:07:57.018 "3b42053e-e5dc-4e64-8cc7-1e3f0f30561a" 00:07:57.018 ], 00:07:57.018 "product_name": "NVMe disk", 00:07:57.018 "block_size": 4096, 00:07:57.018 "num_blocks": 38912, 00:07:57.018 "uuid": "3b42053e-e5dc-4e64-8cc7-1e3f0f30561a", 00:07:57.018 "numa_id": 0, 00:07:57.018 "assigned_rate_limits": { 00:07:57.018 "rw_ios_per_sec": 0, 00:07:57.018 "rw_mbytes_per_sec": 0, 00:07:57.018 "r_mbytes_per_sec": 0, 00:07:57.018 "w_mbytes_per_sec": 0 00:07:57.018 }, 00:07:57.018 "claimed": false, 00:07:57.018 "zoned": false, 00:07:57.018 "supported_io_types": { 00:07:57.018 "read": true, 00:07:57.018 "write": true, 00:07:57.018 "unmap": true, 00:07:57.018 "flush": true, 00:07:57.018 "reset": true, 00:07:57.018 "nvme_admin": true, 00:07:57.018 "nvme_io": true, 00:07:57.018 "nvme_io_md": false, 00:07:57.018 "write_zeroes": true, 00:07:57.018 "zcopy": false, 00:07:57.018 "get_zone_info": false, 00:07:57.018 "zone_management": false, 00:07:57.018 "zone_append": false, 00:07:57.018 "compare": true, 00:07:57.018 "compare_and_write": true, 00:07:57.018 "abort": true, 00:07:57.018 "seek_hole": false, 00:07:57.018 "seek_data": false, 00:07:57.018 "copy": true, 00:07:57.018 "nvme_iov_md": false 00:07:57.018 }, 00:07:57.018 "memory_domains": [ 00:07:57.018 { 00:07:57.018 "dma_device_id": "system", 00:07:57.018 "dma_device_type": 1 00:07:57.018 } 00:07:57.018 ], 00:07:57.018 "driver_specific": { 00:07:57.018 "nvme": [ 00:07:57.018 { 00:07:57.018 "trid": { 00:07:57.018 "trtype": "TCP", 00:07:57.018 "adrfam": "IPv4", 00:07:57.018 "traddr": "10.0.0.2", 00:07:57.018 "trsvcid": "4420", 00:07:57.018 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:57.018 }, 00:07:57.018 "ctrlr_data": { 00:07:57.018 "cntlid": 1, 00:07:57.018 "vendor_id": "0x8086", 00:07:57.018 "model_number": "SPDK bdev Controller", 00:07:57.018 "serial_number": "SPDK0", 00:07:57.018 "firmware_revision": "25.01", 00:07:57.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:57.018 "oacs": { 00:07:57.018 "security": 0, 00:07:57.018 "format": 0, 00:07:57.018 "firmware": 0, 00:07:57.018 "ns_manage": 0 00:07:57.018 }, 00:07:57.018 "multi_ctrlr": true, 00:07:57.018 
"ana_reporting": false 00:07:57.018 }, 00:07:57.018 "vs": { 00:07:57.018 "nvme_version": "1.3" 00:07:57.018 }, 00:07:57.018 "ns_data": { 00:07:57.018 "id": 1, 00:07:57.018 "can_share": true 00:07:57.018 } 00:07:57.018 } 00:07:57.018 ], 00:07:57.018 "mp_policy": "active_passive" 00:07:57.018 } 00:07:57.018 } 00:07:57.018 ] 00:07:57.018 16:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2528157 00:07:57.018 16:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:57.018 16:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:57.018 Running I/O for 10 seconds... 00:07:57.959 Latency(us) 00:07:57.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.959 Nvme0n1 : 1.00 19319.00 75.46 0.00 0.00 0.00 0.00 0.00 00:07:57.959 =================================================================================================================== 00:07:57.959 Total : 19319.00 75.46 0.00 0.00 0.00 0.00 0.00 00:07:57.959 00:07:58.898 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bf5ba487-eb25-4e5c-96e4-ef90460fc8fd 00:07:59.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.158 Nvme0n1 : 2.00 19408.00 75.81 0.00 0.00 0.00 0.00 0.00 00:07:59.158 =================================================================================================================== 00:07:59.158 Total : 19408.00 75.81 0.00 0.00 0.00 0.00 0.00 00:07:59.158 00:07:59.158 true 00:07:59.158 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf5ba487-eb25-4e5c-96e4-ef90460fc8fd 00:07:59.158 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:59.417 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:59.417 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:59.417 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2528157 00:07:59.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.986 Nvme0n1 : 3.00 19481.67 76.10 0.00 0.00 0.00 0.00 0.00 00:07:59.986 =================================================================================================================== 00:07:59.986 Total : 19481.67 76.10 0.00 0.00 0.00 0.00 0.00 00:07:59.986 00:08:01.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.368 Nvme0n1 : 4.00 19534.25 76.31 0.00 0.00 0.00 0.00 0.00 00:08:01.368 =================================================================================================================== 00:08:01.368 Total : 19534.25 76.31 0.00 0.00 0.00 0.00 0.00 00:08:01.368 00:08:02.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.308 
Nvme0n1 : 5.00 19578.40 76.48 0.00 0.00 0.00 0.00 0.00 00:08:02.308 =================================================================================================================== 00:08:02.308 Total : 19578.40 76.48 0.00 0.00 0.00 0.00 0.00 00:08:02.308 00:08:03.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.246 Nvme0n1 : 6.00 19588.00 76.52 0.00 0.00 0.00 0.00 0.00 00:08:03.246 =================================================================================================================== 00:08:03.246 Total : 19588.00 76.52 0.00 0.00 0.00 0.00 0.00 00:08:03.246 00:08:04.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.184 Nvme0n1 : 7.00 19611.71 76.61 0.00 0.00 0.00 0.00 0.00 00:08:04.185 =================================================================================================================== 00:08:04.185 Total : 19611.71 76.61 0.00 0.00 0.00 0.00 0.00 00:08:04.185 00:08:05.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.123 Nvme0n1 : 8.00 19637.38 76.71 0.00 0.00 0.00 0.00 0.00 00:08:05.123 =================================================================================================================== 00:08:05.123 Total : 19637.38 76.71 0.00 0.00 0.00 0.00 0.00 00:08:05.123 00:08:06.060 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.060 Nvme0n1 : 9.00 19650.11 76.76 0.00 0.00 0.00 0.00 0.00 00:08:06.060 =================================================================================================================== 00:08:06.060 Total : 19650.11 76.76 0.00 0.00 0.00 0.00 0.00 00:08:06.060 00:08:07.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.000 Nvme0n1 : 10.00 19666.40 76.82 0.00 0.00 0.00 0.00 0.00 00:08:07.000 =================================================================================================================== 00:08:07.000 Total : 19666.40 76.82 0.00 0.00 0.00 0.00 0.00 00:08:07.000 00:08:07.000 00:08:07.000 Latency(us) 00:08:07.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.000 Nvme0n1 : 10.00 19665.44 76.82 0.00 0.00 6505.49 3881.75 12250.19 00:08:07.000 =================================================================================================================== 00:08:07.000 Total : 19665.44 76.82 0.00 0.00 6505.49 3881.75 12250.19 00:08:07.000 { 00:08:07.000 "results": [ 00:08:07.000 { 00:08:07.000 "job": "Nvme0n1", 00:08:07.000 "core_mask": "0x2", 00:08:07.000 "workload": "randwrite", 00:08:07.000 "status": "finished", 00:08:07.000 "queue_depth": 128, 00:08:07.000 "io_size": 4096, 00:08:07.000 "runtime": 10.003795, 00:08:07.000 "iops": 19665.43696667115, 00:08:07.000 "mibps": 76.81811315105918, 00:08:07.000 "io_failed": 0, 00:08:07.000 "io_timeout": 0, 00:08:07.000 "avg_latency_us": 6505.490377712096, 00:08:07.000 "min_latency_us": 3881.7476923076924, 00:08:07.000 "max_latency_us": 12250.190769230769 00:08:07.000 } 00:08:07.000 ], 00:08:07.000 "core_count": 1 00:08:07.000 } 00:08:07.000 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2528127 00:08:07.000 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2528127 ']' 00:08:07.000 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean 
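
The grow itself happened two seconds into the run above: the backing file is truncated to 400 MiB, bdev_aio_rescan picks up the new size (51200 -> 102400 blocks of 4 KiB), and bdev_lvol_grow_lvstore expands the pool under live randwrite I/O. 400 MiB / 4 MiB = 100 clusters; minus the metadata cluster, that gives the total_data_clusters=99 asserted above. The grow-and-check step, condensed:

    truncate -s 400M aio_file                    # same stand-in path as before
    ./scripts/rpc.py bdev_aio_rescan aio_bdev    # bdev sees 51200 -> 102400 blocks
    ./scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
    clusters=$(./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" \
                 | jq -r '.[0].total_data_clusters')
    (( clusters == 99 )) || echo "lvstore did not grow as expected" >&2
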
-- common/autotest_common.sh@954 -- # kill -0 2528127 00:08:07.000 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:07.000 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.000 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2528127 00:08:07.260 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:07.260 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:07.260 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2528127' 00:08:07.260 killing process with pid 2528127 00:08:07.260 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2528127 00:08:07.260 Received shutdown signal, test time was about 10.000000 seconds 00:08:07.260 00:08:07.260 Latency(us) 00:08:07.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.260 =================================================================================================================== 00:08:07.260 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:07.260 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2528127 00:08:07.260 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.519 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:07.778 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf5ba487-eb25-4e5c-96e4-ef90460fc8fd 00:08:07.778 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:08.038 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:08.038 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:08.038 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:08.038 [2024-10-01 16:32:59.679399] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:08.299 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf5ba487-eb25-4e5c-96e4-ef90460fc8fd 00:08:08.299 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:08.299 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
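
The free_clusters=61 check above is plain cluster arithmetic: the 150 MiB lvol occupies ceil(150 / 4) = 38 clusters (matching num_allocated_clusters=38 and num_blocks=38912 = 38 x 4 MiB / 4 KiB in the bdev dump earlier), so the grown 99-cluster pool has 99 - 38 = 61 clusters free. As a one-off check:

    free=$(./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    (( free == 99 - 38 )) || echo "unexpected free cluster count: $free" >&2
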
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf5ba487-eb25-4e5c-96e4-ef90460fc8fd 00:08:08.299 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:08.299 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.299 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:08.299 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.299 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:08.299 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.299 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:08.299 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:08.299 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf5ba487-eb25-4e5c-96e4-ef90460fc8fd 00:08:08.299 request: 00:08:08.299 { 00:08:08.299 "uuid": "bf5ba487-eb25-4e5c-96e4-ef90460fc8fd", 00:08:08.299 "method": "bdev_lvol_get_lvstores", 00:08:08.299 "req_id": 1 00:08:08.299 } 00:08:08.299 Got JSON-RPC error response 00:08:08.299 response: 00:08:08.299 { 00:08:08.299 "code": -19, 00:08:08.299 "message": "No such device" 00:08:08.299 } 00:08:08.299 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:08.299 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:08.299 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:08.299 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:08.299 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:08.559 aio_bdev 00:08:08.559 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3b42053e-e5dc-4e64-8cc7-1e3f0f30561a 00:08:08.559 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=3b42053e-e5dc-4e64-8cc7-1e3f0f30561a 00:08:08.559 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:08.559 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:08.559 16:33:00 
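
This block is the hotremove negative test: bdev_aio_delete pulled the base bdev out from under the lvstore, and the NOT wrapper asserts that bdev_lvol_get_lvstores now fails, which it does with JSON-RPC error -19 (ENODEV, "No such device"). Stripped of the wrapper machinery, the assertion is just:

    if ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" 2>/dev/null; then
      echo "FAIL: lvstore still visible after base bdev hotremove" >&2
      exit 1
    fi
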
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:08.560 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:08.560 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:08.820 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3b42053e-e5dc-4e64-8cc7-1e3f0f30561a -t 2000 00:08:09.080 [ 00:08:09.080 { 00:08:09.080 "name": "3b42053e-e5dc-4e64-8cc7-1e3f0f30561a", 00:08:09.080 "aliases": [ 00:08:09.080 "lvs/lvol" 00:08:09.080 ], 00:08:09.080 "product_name": "Logical Volume", 00:08:09.080 "block_size": 4096, 00:08:09.080 "num_blocks": 38912, 00:08:09.080 "uuid": "3b42053e-e5dc-4e64-8cc7-1e3f0f30561a", 00:08:09.080 "assigned_rate_limits": { 00:08:09.080 "rw_ios_per_sec": 0, 00:08:09.080 "rw_mbytes_per_sec": 0, 00:08:09.080 "r_mbytes_per_sec": 0, 00:08:09.080 "w_mbytes_per_sec": 0 00:08:09.080 }, 00:08:09.080 "claimed": false, 00:08:09.080 "zoned": false, 00:08:09.080 "supported_io_types": { 00:08:09.080 "read": true, 00:08:09.080 "write": true, 00:08:09.080 "unmap": true, 00:08:09.080 "flush": false, 00:08:09.080 "reset": true, 00:08:09.080 "nvme_admin": false, 00:08:09.080 "nvme_io": false, 00:08:09.080 "nvme_io_md": false, 00:08:09.080 "write_zeroes": true, 00:08:09.080 "zcopy": false, 00:08:09.080 "get_zone_info": false, 00:08:09.080 "zone_management": false, 00:08:09.080 "zone_append": false, 00:08:09.080 "compare": false, 00:08:09.080 "compare_and_write": false, 00:08:09.080 "abort": false, 00:08:09.080 "seek_hole": true, 00:08:09.080 "seek_data": true, 00:08:09.080 "copy": false, 00:08:09.080 "nvme_iov_md": false 00:08:09.080 }, 00:08:09.080 "driver_specific": { 00:08:09.080 "lvol": { 00:08:09.080 "lvol_store_uuid": "bf5ba487-eb25-4e5c-96e4-ef90460fc8fd", 00:08:09.080 "base_bdev": "aio_bdev", 00:08:09.080 "thin_provision": false, 00:08:09.080 "num_allocated_clusters": 38, 00:08:09.080 "snapshot": false, 00:08:09.080 "clone": false, 00:08:09.080 "esnap_clone": false 00:08:09.080 } 00:08:09.080 } 00:08:09.080 } 00:08:09.080 ] 00:08:09.080 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:09.080 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf5ba487-eb25-4e5c-96e4-ef90460fc8fd 00:08:09.080 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:09.340 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:09.340 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:09.340 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf5ba487-eb25-4e5c-96e4-ef90460fc8fd 00:08:09.600 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:09.600 16:33:01 
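
After aio_bdev is re-created, the lvstore is re-examined and the lvol reappears under its old UUID; waitforbdev leans on bdev_get_bdevs -t 2000, which asks the target itself to wait up to two seconds for the bdev to show up. The dump just above confirms the lvol still owns its 38 clusters:

    ./scripts/rpc.py bdev_wait_for_examine       # let the lvstore be re-discovered
    ./scripts/rpc.py bdev_get_bdevs -b "$lvol" -t 2000 \
      | jq -r '.[0].driver_specific.lvol.num_allocated_clusters'   # expect 38
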
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3b42053e-e5dc-4e64-8cc7-1e3f0f30561a 00:08:09.600 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bf5ba487-eb25-4e5c-96e4-ef90460fc8fd 00:08:09.860 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:10.121 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:10.121 00:08:10.121 real 0m16.263s 00:08:10.121 user 0m15.837s 00:08:10.121 sys 0m1.524s 00:08:10.121 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.121 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:10.121 ************************************ 00:08:10.121 END TEST lvs_grow_clean 00:08:10.121 ************************************ 00:08:10.121 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:10.121 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:10.121 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.121 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:10.121 ************************************ 00:08:10.121 START TEST lvs_grow_dirty 00:08:10.121 ************************************ 00:08:10.121 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:10.121 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:10.121 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:10.121 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:10.121 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:10.121 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:10.121 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:10.121 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:10.121 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:10.121 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
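
Teardown walks the stack in reverse order of creation (lvol, then lvstore, then the AIO bdev, then the backing file) before the dirty variant rebuilds the same stack from scratch:

    ./scripts/rpc.py bdev_lvol_delete "$lvol"
    ./scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
    ./scripts/rpc.py bdev_aio_delete aio_bdev
    rm -f aio_file
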
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:10.382 16:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:10.382 16:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:10.642 16:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d5be5e02-6e93-4a73-9f4e-29d1d073cde9 00:08:10.642 16:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5be5e02-6e93-4a73-9f4e-29d1d073cde9 00:08:10.642 16:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:10.902 16:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:10.902 16:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:10.902 16:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d5be5e02-6e93-4a73-9f4e-29d1d073cde9 lvol 150 00:08:11.161 16:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4ac275de-62ef-4dd5-b5ed-24dd57df0bb4 00:08:11.162 16:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:11.162 16:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:11.421 [2024-10-01 16:33:02.871169] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:11.421 [2024-10-01 16:33:02.871218] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:11.421 true 00:08:11.421 16:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5be5e02-6e93-4a73-9f4e-29d1d073cde9 00:08:11.421 16:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:11.421 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:11.421 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:11.681 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4ac275de-62ef-4dd5-b5ed-24dd57df0bb4 00:08:11.941 16:33:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:12.201 [2024-10-01 16:33:03.693565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.201 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:12.461 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2531069 00:08:12.461 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:12.461 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:12.461 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2531069 /var/tmp/bdevperf.sock 00:08:12.461 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2531069 ']' 00:08:12.461 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:12.461 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.461 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:12.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:12.461 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.461 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:12.461 [2024-10-01 16:33:03.965277] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:08:12.461 [2024-10-01 16:33:03.965325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2531069 ] 00:08:12.461 [2024-10-01 16:33:04.016225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.461 [2024-10-01 16:33:04.070314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.719 16:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.719 16:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:12.719 16:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:12.978 Nvme0n1 00:08:12.978 16:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:13.239 [ 00:08:13.239 { 00:08:13.239 "name": "Nvme0n1", 00:08:13.239 "aliases": [ 00:08:13.239 "4ac275de-62ef-4dd5-b5ed-24dd57df0bb4" 00:08:13.239 ], 00:08:13.239 "product_name": "NVMe disk", 00:08:13.239 "block_size": 4096, 00:08:13.239 "num_blocks": 38912, 00:08:13.239 "uuid": "4ac275de-62ef-4dd5-b5ed-24dd57df0bb4", 00:08:13.239 "numa_id": 0, 00:08:13.239 "assigned_rate_limits": { 00:08:13.239 "rw_ios_per_sec": 0, 00:08:13.239 "rw_mbytes_per_sec": 0, 00:08:13.239 "r_mbytes_per_sec": 0, 00:08:13.239 "w_mbytes_per_sec": 0 00:08:13.239 }, 00:08:13.239 "claimed": false, 00:08:13.239 "zoned": false, 00:08:13.239 "supported_io_types": { 00:08:13.239 "read": true, 00:08:13.239 "write": true, 00:08:13.239 "unmap": true, 00:08:13.239 "flush": true, 00:08:13.239 "reset": true, 00:08:13.239 "nvme_admin": true, 00:08:13.239 "nvme_io": true, 00:08:13.239 "nvme_io_md": false, 00:08:13.239 "write_zeroes": true, 00:08:13.239 "zcopy": false, 00:08:13.239 "get_zone_info": false, 00:08:13.239 "zone_management": false, 00:08:13.239 "zone_append": false, 00:08:13.239 "compare": true, 00:08:13.239 "compare_and_write": true, 00:08:13.239 "abort": true, 00:08:13.239 "seek_hole": false, 00:08:13.239 "seek_data": false, 00:08:13.239 "copy": true, 00:08:13.239 "nvme_iov_md": false 00:08:13.239 }, 00:08:13.239 "memory_domains": [ 00:08:13.239 { 00:08:13.239 "dma_device_id": "system", 00:08:13.239 "dma_device_type": 1 00:08:13.239 } 00:08:13.239 ], 00:08:13.239 "driver_specific": { 00:08:13.239 "nvme": [ 00:08:13.239 { 00:08:13.239 "trid": { 00:08:13.239 "trtype": "TCP", 00:08:13.239 "adrfam": "IPv4", 00:08:13.239 "traddr": "10.0.0.2", 00:08:13.239 "trsvcid": "4420", 00:08:13.239 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:13.239 }, 00:08:13.239 "ctrlr_data": { 00:08:13.239 "cntlid": 1, 00:08:13.239 "vendor_id": "0x8086", 00:08:13.239 "model_number": "SPDK bdev Controller", 00:08:13.239 "serial_number": "SPDK0", 00:08:13.239 "firmware_revision": "25.01", 00:08:13.239 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:13.239 "oacs": { 00:08:13.239 "security": 0, 00:08:13.239 "format": 0, 00:08:13.239 "firmware": 0, 00:08:13.239 "ns_manage": 0 00:08:13.239 }, 00:08:13.239 "multi_ctrlr": true, 00:08:13.239 
"ana_reporting": false 00:08:13.239 }, 00:08:13.239 "vs": { 00:08:13.239 "nvme_version": "1.3" 00:08:13.239 }, 00:08:13.239 "ns_data": { 00:08:13.239 "id": 1, 00:08:13.239 "can_share": true 00:08:13.239 } 00:08:13.239 } 00:08:13.239 ], 00:08:13.239 "mp_policy": "active_passive" 00:08:13.239 } 00:08:13.239 } 00:08:13.239 ] 00:08:13.239 16:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2531096 00:08:13.239 16:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:13.239 16:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:13.239 Running I/O for 10 seconds... 00:08:14.177 Latency(us) 00:08:14.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.177 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.177 Nvme0n1 : 1.00 19385.00 75.72 0.00 0.00 0.00 0.00 0.00 00:08:14.177 =================================================================================================================== 00:08:14.177 Total : 19385.00 75.72 0.00 0.00 0.00 0.00 0.00 00:08:14.177 00:08:15.116 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d5be5e02-6e93-4a73-9f4e-29d1d073cde9 00:08:15.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.376 Nvme0n1 : 2.00 19506.00 76.20 0.00 0.00 0.00 0.00 0.00 00:08:15.376 =================================================================================================================== 00:08:15.376 Total : 19506.00 76.20 0.00 0.00 0.00 0.00 0.00 00:08:15.376 00:08:15.376 true 00:08:15.376 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5be5e02-6e93-4a73-9f4e-29d1d073cde9 00:08:15.376 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:15.636 16:33:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:15.636 16:33:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:15.636 16:33:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2531096 00:08:16.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.246 Nvme0n1 : 3.00 19565.00 76.43 0.00 0.00 0.00 0.00 0.00 00:08:16.246 =================================================================================================================== 00:08:16.246 Total : 19565.00 76.43 0.00 0.00 0.00 0.00 0.00 00:08:16.246 00:08:17.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.185 Nvme0n1 : 4.00 19584.00 76.50 0.00 0.00 0.00 0.00 0.00 00:08:17.185 =================================================================================================================== 00:08:17.185 Total : 19584.00 76.50 0.00 0.00 0.00 0.00 0.00 00:08:17.185 00:08:18.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.566 
Nvme0n1 : 5.00 19616.60 76.63 0.00 0.00 0.00 0.00 0.00 00:08:18.566 =================================================================================================================== 00:08:18.566 Total : 19616.60 76.63 0.00 0.00 0.00 0.00 0.00 00:08:18.566 00:08:19.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.508 Nvme0n1 : 6.00 19639.50 76.72 0.00 0.00 0.00 0.00 0.00 00:08:19.508 =================================================================================================================== 00:08:19.508 Total : 19639.50 76.72 0.00 0.00 0.00 0.00 0.00 00:08:19.508 00:08:20.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.449 Nvme0n1 : 7.00 19646.71 76.74 0.00 0.00 0.00 0.00 0.00 00:08:20.449 =================================================================================================================== 00:08:20.449 Total : 19646.71 76.74 0.00 0.00 0.00 0.00 0.00 00:08:20.449 00:08:21.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.390 Nvme0n1 : 8.00 19666.75 76.82 0.00 0.00 0.00 0.00 0.00 00:08:21.390 =================================================================================================================== 00:08:21.390 Total : 19666.75 76.82 0.00 0.00 0.00 0.00 0.00 00:08:21.390 00:08:22.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.330 Nvme0n1 : 9.00 19676.11 76.86 0.00 0.00 0.00 0.00 0.00 00:08:22.330 =================================================================================================================== 00:08:22.330 Total : 19676.11 76.86 0.00 0.00 0.00 0.00 0.00 00:08:22.330 00:08:23.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.272 Nvme0n1 : 10.00 19690.30 76.92 0.00 0.00 0.00 0.00 0.00 00:08:23.272 =================================================================================================================== 00:08:23.272 Total : 19690.30 76.92 0.00 0.00 0.00 0.00 0.00 00:08:23.272 00:08:23.272 00:08:23.272 Latency(us) 00:08:23.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.272 Nvme0n1 : 10.01 19689.75 76.91 0.00 0.00 6497.41 3100.36 11947.72 00:08:23.272 =================================================================================================================== 00:08:23.272 Total : 19689.75 76.91 0.00 0.00 6497.41 3100.36 11947.72 00:08:23.272 { 00:08:23.272 "results": [ 00:08:23.272 { 00:08:23.272 "job": "Nvme0n1", 00:08:23.272 "core_mask": "0x2", 00:08:23.272 "workload": "randwrite", 00:08:23.272 "status": "finished", 00:08:23.272 "queue_depth": 128, 00:08:23.272 "io_size": 4096, 00:08:23.272 "runtime": 10.006781, 00:08:23.272 "iops": 19689.748381622423, 00:08:23.272 "mibps": 76.91307961571259, 00:08:23.272 "io_failed": 0, 00:08:23.272 "io_timeout": 0, 00:08:23.272 "avg_latency_us": 6497.409738897003, 00:08:23.272 "min_latency_us": 3100.356923076923, 00:08:23.272 "max_latency_us": 11947.716923076923 00:08:23.272 } 00:08:23.272 ], 00:08:23.272 "core_count": 1 00:08:23.272 } 00:08:23.272 16:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2531069 00:08:23.272 16:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2531069 ']' 00:08:23.272 16:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@954 -- # kill -0 2531069 00:08:23.272 16:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:23.272 16:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:23.272 16:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2531069 00:08:23.532 16:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:23.532 16:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:23.532 16:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2531069' 00:08:23.532 killing process with pid 2531069 00:08:23.532 16:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2531069 00:08:23.532 Received shutdown signal, test time was about 10.000000 seconds 00:08:23.532 00:08:23.532 Latency(us) 00:08:23.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.532 =================================================================================================================== 00:08:23.532 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:23.532 16:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2531069 00:08:23.532 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:23.792 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:24.053 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5be5e02-6e93-4a73-9f4e-29d1d073cde9 00:08:24.053 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:24.053 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:24.053 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:24.053 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2527463 00:08:24.053 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2527463 00:08:24.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2527463 Killed "${NVMF_APP[@]}" "$@" 00:08:24.314 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:24.314 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:24.314 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:24.314 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:08:24.314 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:24.314 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=2533396 00:08:24.314 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 2533396 00:08:24.314 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:24.314 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2533396 ']' 00:08:24.314 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.314 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:24.314 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.314 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:24.314 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:24.314 [2024-10-01 16:33:15.827691] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:08:24.314 [2024-10-01 16:33:15.827739] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.314 [2024-10-01 16:33:15.908508] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.314 [2024-10-01 16:33:15.970713] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.314 [2024-10-01 16:33:15.970748] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.314 [2024-10-01 16:33:15.970755] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.314 [2024-10-01 16:33:15.970762] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.314 [2024-10-01 16:33:15.970767] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:24.314 [2024-10-01 16:33:15.970784] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.255 16:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:25.255 16:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:25.255 16:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:25.255 16:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:25.255 16:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:25.255 16:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.255 16:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:25.255 [2024-10-01 16:33:16.871229] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:25.255 [2024-10-01 16:33:16.871317] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:25.255 [2024-10-01 16:33:16.871347] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:25.255 16:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:25.255 16:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4ac275de-62ef-4dd5-b5ed-24dd57df0bb4 00:08:25.255 16:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=4ac275de-62ef-4dd5-b5ed-24dd57df0bb4 00:08:25.255 16:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.255 16:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:25.255 16:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.255 16:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:25.255 16:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:25.515 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4ac275de-62ef-4dd5-b5ed-24dd57df0bb4 -t 2000 00:08:25.775 [ 00:08:25.775 { 00:08:25.775 "name": "4ac275de-62ef-4dd5-b5ed-24dd57df0bb4", 00:08:25.775 "aliases": [ 00:08:25.775 "lvs/lvol" 00:08:25.775 ], 00:08:25.775 "product_name": "Logical Volume", 00:08:25.775 "block_size": 4096, 00:08:25.775 "num_blocks": 38912, 00:08:25.775 "uuid": "4ac275de-62ef-4dd5-b5ed-24dd57df0bb4", 00:08:25.775 "assigned_rate_limits": { 00:08:25.775 "rw_ios_per_sec": 0, 00:08:25.775 "rw_mbytes_per_sec": 0, 00:08:25.775 "r_mbytes_per_sec": 0, 00:08:25.775 "w_mbytes_per_sec": 0 00:08:25.775 }, 00:08:25.775 "claimed": false, 00:08:25.775 "zoned": false, 
00:08:25.775 "supported_io_types": { 00:08:25.775 "read": true, 00:08:25.775 "write": true, 00:08:25.775 "unmap": true, 00:08:25.775 "flush": false, 00:08:25.775 "reset": true, 00:08:25.775 "nvme_admin": false, 00:08:25.775 "nvme_io": false, 00:08:25.775 "nvme_io_md": false, 00:08:25.775 "write_zeroes": true, 00:08:25.775 "zcopy": false, 00:08:25.775 "get_zone_info": false, 00:08:25.775 "zone_management": false, 00:08:25.775 "zone_append": false, 00:08:25.775 "compare": false, 00:08:25.775 "compare_and_write": false, 00:08:25.775 "abort": false, 00:08:25.775 "seek_hole": true, 00:08:25.775 "seek_data": true, 00:08:25.775 "copy": false, 00:08:25.775 "nvme_iov_md": false 00:08:25.775 }, 00:08:25.775 "driver_specific": { 00:08:25.775 "lvol": { 00:08:25.775 "lvol_store_uuid": "d5be5e02-6e93-4a73-9f4e-29d1d073cde9", 00:08:25.775 "base_bdev": "aio_bdev", 00:08:25.775 "thin_provision": false, 00:08:25.775 "num_allocated_clusters": 38, 00:08:25.775 "snapshot": false, 00:08:25.775 "clone": false, 00:08:25.775 "esnap_clone": false 00:08:25.775 } 00:08:25.775 } 00:08:25.775 } 00:08:25.775 ] 00:08:25.775 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:25.775 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5be5e02-6e93-4a73-9f4e-29d1d073cde9 00:08:25.775 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:26.035 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:26.035 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5be5e02-6e93-4a73-9f4e-29d1d073cde9 00:08:26.035 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:26.296 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:26.296 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:26.296 [2024-10-01 16:33:17.928140] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:26.556 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5be5e02-6e93-4a73-9f4e-29d1d073cde9 00:08:26.556 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:26.556 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5be5e02-6e93-4a73-9f4e-29d1d073cde9 00:08:26.556 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.556 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:08:26.556 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.556 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.556 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.556 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.556 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.556 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:26.556 16:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5be5e02-6e93-4a73-9f4e-29d1d073cde9 00:08:26.556 request: 00:08:26.556 { 00:08:26.556 "uuid": "d5be5e02-6e93-4a73-9f4e-29d1d073cde9", 00:08:26.556 "method": "bdev_lvol_get_lvstores", 00:08:26.556 "req_id": 1 00:08:26.556 } 00:08:26.556 Got JSON-RPC error response 00:08:26.556 response: 00:08:26.556 { 00:08:26.556 "code": -19, 00:08:26.556 "message": "No such device" 00:08:26.556 } 00:08:26.556 16:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:26.556 16:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.556 16:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:26.556 16:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.556 16:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:26.816 aio_bdev 00:08:26.816 16:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4ac275de-62ef-4dd5-b5ed-24dd57df0bb4 00:08:26.816 16:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=4ac275de-62ef-4dd5-b5ed-24dd57df0bb4 00:08:26.816 16:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:26.816 16:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:26.816 16:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:26.816 16:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:26.816 16:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:27.078 16:33:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4ac275de-62ef-4dd5-b5ed-24dd57df0bb4 -t 2000 00:08:27.339 [ 00:08:27.339 { 00:08:27.339 "name": "4ac275de-62ef-4dd5-b5ed-24dd57df0bb4", 00:08:27.339 "aliases": [ 00:08:27.339 "lvs/lvol" 00:08:27.339 ], 00:08:27.339 "product_name": "Logical Volume", 00:08:27.339 "block_size": 4096, 00:08:27.339 "num_blocks": 38912, 00:08:27.339 "uuid": "4ac275de-62ef-4dd5-b5ed-24dd57df0bb4", 00:08:27.339 "assigned_rate_limits": { 00:08:27.339 "rw_ios_per_sec": 0, 00:08:27.339 "rw_mbytes_per_sec": 0, 00:08:27.339 "r_mbytes_per_sec": 0, 00:08:27.339 "w_mbytes_per_sec": 0 00:08:27.339 }, 00:08:27.339 "claimed": false, 00:08:27.339 "zoned": false, 00:08:27.339 "supported_io_types": { 00:08:27.339 "read": true, 00:08:27.339 "write": true, 00:08:27.339 "unmap": true, 00:08:27.339 "flush": false, 00:08:27.339 "reset": true, 00:08:27.339 "nvme_admin": false, 00:08:27.339 "nvme_io": false, 00:08:27.339 "nvme_io_md": false, 00:08:27.339 "write_zeroes": true, 00:08:27.339 "zcopy": false, 00:08:27.339 "get_zone_info": false, 00:08:27.339 "zone_management": false, 00:08:27.339 "zone_append": false, 00:08:27.339 "compare": false, 00:08:27.339 "compare_and_write": false, 00:08:27.339 "abort": false, 00:08:27.339 "seek_hole": true, 00:08:27.339 "seek_data": true, 00:08:27.339 "copy": false, 00:08:27.339 "nvme_iov_md": false 00:08:27.339 }, 00:08:27.339 "driver_specific": { 00:08:27.339 "lvol": { 00:08:27.339 "lvol_store_uuid": "d5be5e02-6e93-4a73-9f4e-29d1d073cde9", 00:08:27.339 "base_bdev": "aio_bdev", 00:08:27.339 "thin_provision": false, 00:08:27.339 "num_allocated_clusters": 38, 00:08:27.339 "snapshot": false, 00:08:27.339 "clone": false, 00:08:27.339 "esnap_clone": false 00:08:27.339 } 00:08:27.339 } 00:08:27.339 } 00:08:27.339 ] 00:08:27.339 16:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:27.339 16:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5be5e02-6e93-4a73-9f4e-29d1d073cde9 00:08:27.339 16:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:27.599 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:27.599 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5be5e02-6e93-4a73-9f4e-29d1d073cde9 00:08:27.599 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:27.599 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:27.599 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4ac275de-62ef-4dd5-b5ed-24dd57df0bb4 00:08:27.859 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d5be5e02-6e93-4a73-9f4e-29d1d073cde9 
00:08:28.120 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:28.380 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:28.380 00:08:28.380 real 0m18.166s 00:08:28.380 user 0m47.171s 00:08:28.380 sys 0m2.908s 00:08:28.380 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.381 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.381 ************************************ 00:08:28.381 END TEST lvs_grow_dirty 00:08:28.381 ************************************ 00:08:28.381 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:28.381 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:28.381 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:28.381 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:28.381 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:28.381 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:28.381 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:28.381 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:28.381 16:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:28.381 nvmf_trace.0 00:08:28.381 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:28.381 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:28.381 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:28.381 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:28.381 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:28.381 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:28.381 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:28.381 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:28.381 rmmod nvme_tcp 00:08:28.381 rmmod nvme_fabrics 00:08:28.641 rmmod nvme_keyring 00:08:28.641 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:28.641 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:28.641 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:28.641 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 2533396 ']' 00:08:28.641 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 2533396 00:08:28.641 
16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2533396 ']' 00:08:28.641 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2533396 00:08:28.641 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:28.641 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.641 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2533396 00:08:28.641 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:28.641 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:28.641 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2533396' 00:08:28.641 killing process with pid 2533396 00:08:28.642 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2533396 00:08:28.642 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2533396 00:08:28.642 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:28.642 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:28.642 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:28.642 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:28.642 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:08:28.642 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:08:28.642 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:28.642 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:28.642 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:28.642 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.642 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.642 16:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:31.184 00:08:31.184 real 0m45.765s 00:08:31.184 user 1m10.055s 00:08:31.184 sys 0m10.425s 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:31.184 ************************************ 00:08:31.184 END TEST nvmf_lvs_grow 00:08:31.184 ************************************ 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:31.184 ************************************ 00:08:31.184 START TEST nvmf_bdev_io_wait 00:08:31.184 ************************************ 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:31.184 * Looking for test storage... 00:08:31.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:31.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.184 --rc genhtml_branch_coverage=1 00:08:31.184 --rc genhtml_function_coverage=1 00:08:31.184 --rc genhtml_legend=1 00:08:31.184 --rc geninfo_all_blocks=1 00:08:31.184 --rc geninfo_unexecuted_blocks=1 00:08:31.184 00:08:31.184 ' 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:31.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.184 --rc genhtml_branch_coverage=1 00:08:31.184 --rc genhtml_function_coverage=1 00:08:31.184 --rc genhtml_legend=1 00:08:31.184 --rc geninfo_all_blocks=1 00:08:31.184 --rc geninfo_unexecuted_blocks=1 00:08:31.184 00:08:31.184 ' 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:31.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.184 --rc genhtml_branch_coverage=1 00:08:31.184 --rc genhtml_function_coverage=1 00:08:31.184 --rc genhtml_legend=1 00:08:31.184 --rc geninfo_all_blocks=1 00:08:31.184 --rc geninfo_unexecuted_blocks=1 00:08:31.184 00:08:31.184 ' 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:31.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.184 --rc genhtml_branch_coverage=1 00:08:31.184 --rc genhtml_function_coverage=1 00:08:31.184 --rc genhtml_legend=1 00:08:31.184 --rc geninfo_all_blocks=1 00:08:31.184 --rc geninfo_unexecuted_blocks=1 00:08:31.184 00:08:31.184 ' 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.184 16:33:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:31.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:31.184 16:33:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.327 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:39.328 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:39.328 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.328 16:33:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:39.328 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:39.328 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:39.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:08:39.328 00:08:39.328 --- 10.0.0.2 ping statistics --- 00:08:39.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.328 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:39.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:08:39.328 00:08:39.328 --- 10.0.0.1 ping statistics --- 00:08:39.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.328 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=2538271 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 2538271 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2538271 ']' 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.328 16:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:39.328 [2024-10-01 16:33:29.935643] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
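The nvmf_tcp_init sequence traced above carves the two E810 ports into a point-to-point test link: cvl_0_0 is moved into a private network namespace to serve as the target side, cvl_0_1 stays in the root namespace as the initiator, and the two pings verify the path in both directions. A minimal standalone sketch of the same topology, assuming the interface names and 10.0.0.x addresses shown in the trace:

  #!/usr/bin/env bash
  # Namespace topology as built by nvmf_tcp_init (common.sh@267-291 above).
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                           # target port -> namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # Open TCP/4420; the comment tag is what the iptables-restore cleanup greps for later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                        # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator

Keeping the target in its own namespace is what lets a single host drive real NIC-to-NIC NVMe/TCP traffic here rather than loopback.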
00:08:39.328 [2024-10-01 16:33:29.935703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.328 [2024-10-01 16:33:30.023790] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.328 [2024-10-01 16:33:30.124203] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.328 [2024-10-01 16:33:30.124262] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.329 [2024-10-01 16:33:30.124271] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.329 [2024-10-01 16:33:30.124278] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.329 [2024-10-01 16:33:30.124284] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.329 [2024-10-01 16:33:30.124409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.329 [2024-10-01 16:33:30.124547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.329 [2024-10-01 16:33:30.124678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.329 [2024-10-01 16:33:30.124681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:39.329 [2024-10-01 16:33:30.927551] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.329 Malloc0 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.329 [2024-10-01 16:33:30.977837] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2538588 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2538590 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:39.329 { 00:08:39.329 "params": { 
00:08:39.329 "name": "Nvme$subsystem", 00:08:39.329 "trtype": "$TEST_TRANSPORT", 00:08:39.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.329 "adrfam": "ipv4", 00:08:39.329 "trsvcid": "$NVMF_PORT", 00:08:39.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.329 "hdgst": ${hdgst:-false}, 00:08:39.329 "ddgst": ${ddgst:-false} 00:08:39.329 }, 00:08:39.329 "method": "bdev_nvme_attach_controller" 00:08:39.329 } 00:08:39.329 EOF 00:08:39.329 )") 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2538592 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2538596 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:39.329 { 00:08:39.329 "params": { 00:08:39.329 "name": "Nvme$subsystem", 00:08:39.329 "trtype": "$TEST_TRANSPORT", 00:08:39.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.329 "adrfam": "ipv4", 00:08:39.329 "trsvcid": "$NVMF_PORT", 00:08:39.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.329 "hdgst": ${hdgst:-false}, 00:08:39.329 "ddgst": ${ddgst:-false} 00:08:39.329 }, 00:08:39.329 "method": "bdev_nvme_attach_controller" 00:08:39.329 } 00:08:39.329 EOF 00:08:39.329 )") 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:39.329 { 00:08:39.329 "params": { 00:08:39.329 "name": "Nvme$subsystem", 00:08:39.329 "trtype": "$TEST_TRANSPORT", 00:08:39.329 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:08:39.329 "adrfam": "ipv4", 00:08:39.329 "trsvcid": "$NVMF_PORT", 00:08:39.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.329 "hdgst": ${hdgst:-false}, 00:08:39.329 "ddgst": ${ddgst:-false} 00:08:39.329 }, 00:08:39.329 "method": "bdev_nvme_attach_controller" 00:08:39.329 } 00:08:39.329 EOF 00:08:39.329 )") 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:39.329 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:39.329 { 00:08:39.329 "params": { 00:08:39.329 "name": "Nvme$subsystem", 00:08:39.329 "trtype": "$TEST_TRANSPORT", 00:08:39.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.330 "adrfam": "ipv4", 00:08:39.330 "trsvcid": "$NVMF_PORT", 00:08:39.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.330 "hdgst": ${hdgst:-false}, 00:08:39.330 "ddgst": ${ddgst:-false} 00:08:39.330 }, 00:08:39.330 "method": "bdev_nvme_attach_controller" 00:08:39.330 } 00:08:39.330 EOF 00:08:39.330 )") 00:08:39.330 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:39.330 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2538588 00:08:39.330 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:39.330 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:39.330 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:39.330 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:39.330 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:39.330 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:39.330 "params": { 00:08:39.330 "name": "Nvme1", 00:08:39.330 "trtype": "tcp", 00:08:39.330 "traddr": "10.0.0.2", 00:08:39.330 "adrfam": "ipv4", 00:08:39.330 "trsvcid": "4420", 00:08:39.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:39.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:39.330 "hdgst": false, 00:08:39.330 "ddgst": false 00:08:39.330 }, 00:08:39.330 "method": "bdev_nvme_attach_controller" 00:08:39.330 }' 00:08:39.330 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:08:39.330 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:39.330 16:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:39.330 "params": { 00:08:39.330 "name": "Nvme1", 00:08:39.330 "trtype": "tcp", 00:08:39.330 "traddr": "10.0.0.2", 00:08:39.330 "adrfam": "ipv4", 00:08:39.330 "trsvcid": "4420", 00:08:39.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:39.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:39.330 "hdgst": false, 00:08:39.330 "ddgst": false 00:08:39.330 }, 00:08:39.330 "method": "bdev_nvme_attach_controller" 00:08:39.330 }' 00:08:39.330 16:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:39.330 16:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:39.330 "params": { 00:08:39.330 "name": "Nvme1", 00:08:39.330 "trtype": "tcp", 00:08:39.330 "traddr": "10.0.0.2", 00:08:39.330 "adrfam": "ipv4", 00:08:39.330 "trsvcid": "4420", 00:08:39.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:39.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:39.330 "hdgst": false, 00:08:39.330 "ddgst": false 00:08:39.330 }, 00:08:39.330 "method": "bdev_nvme_attach_controller" 00:08:39.330 }' 00:08:39.330 16:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:39.330 16:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:39.330 "params": { 00:08:39.330 "name": "Nvme1", 00:08:39.330 "trtype": "tcp", 00:08:39.330 "traddr": "10.0.0.2", 00:08:39.330 "adrfam": "ipv4", 00:08:39.330 "trsvcid": "4420", 00:08:39.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:39.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:39.330 "hdgst": false, 00:08:39.330 "ddgst": false 00:08:39.330 }, 00:08:39.330 "method": "bdev_nvme_attach_controller" 00:08:39.330 }' 00:08:39.591 [2024-10-01 16:33:31.033165] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:08:39.591 [2024-10-01 16:33:31.033211] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:39.591 [2024-10-01 16:33:31.041304] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:08:39.591 [2024-10-01 16:33:31.041372] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:39.591 [2024-10-01 16:33:31.046733] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:08:39.591 [2024-10-01 16:33:31.046790] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:39.591 [2024-10-01 16:33:31.046830] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
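With the configs rendered, the four bdevperf workers launch in parallel, one per I/O type, each on a disjoint core mask and with its own shared-memory id (-i, which also selects the spdk1..spdk4 DPDK file prefixes visible in the EAL parameter lines) so the instances cannot collide. A condensed sketch of the launch pattern, with gen_config standing in for the gen_nvmf_target_json/fd-63 plumbing shown above:

  BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  # Common knobs from the trace: queue depth 128, 4 KiB I/O, 1 s runs, 256 MB memory each.
  "$BDEVPERF" -m 0x10 -i 1 --json <(gen_config) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  "$BDEVPERF" -m 0x20 -i 2 --json <(gen_config) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
  "$BDEVPERF" -m 0x40 -i 3 --json <(gen_config) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  "$BDEVPERF" -m 0x80 -i 4 --json <(gen_config) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The harness itself waits on each PID in turn (wait 2538588, 2538590, 2538592, 2538596 above); waiting on all four at once is equivalent for this purpose.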
00:08:39.591 [2024-10-01 16:33:31.046874] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:39.591 [2024-10-01 16:33:31.161625] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.591 [2024-10-01 16:33:31.189727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.591 [2024-10-01 16:33:31.209391] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:08:39.591 [2024-10-01 16:33:31.231689] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:39.852 [2024-10-01 16:33:31.275283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.852 [2024-10-01 16:33:31.323644] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.852 [2024-10-01 16:33:31.325412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:08:39.852 [2024-10-01 16:33:31.370242] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:08:40.112 Running I/O for 1 seconds... 00:08:40.112 Running I/O for 1 seconds... 00:08:40.112 Running I/O for 1 seconds... 00:08:40.112 Running I/O for 1 seconds... 00:08:41.052 10802.00 IOPS, 42.20 MiB/s 00:08:41.052 Latency(us) 00:08:41.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.052 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:41.052 Nvme1n1 : 1.01 10835.88 42.33 0.00 0.00 11748.51 5242.88 17241.01 00:08:41.052 =================================================================================================================== 00:08:41.052 Total : 10835.88 42.33 0.00 0.00 11748.51 5242.88 17241.01 00:08:41.052 16755.00 IOPS, 65.45 MiB/s 00:08:41.052 Latency(us) 00:08:41.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.052 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:41.052 Nvme1n1 : 1.01 16795.61 65.61 0.00 0.00 7598.73 3755.72 17241.01 00:08:41.052 =================================================================================================================== 00:08:41.053 Total : 16795.61 65.61 0.00 0.00 7598.73 3755.72 17241.01 00:08:41.053 10773.00 IOPS, 42.08 MiB/s 00:08:41.053 Latency(us) 00:08:41.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.053 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:41.053 Nvme1n1 : 1.01 10909.65 42.62 0.00 0.00 11708.94 2054.30 27021.00 00:08:41.053 =================================================================================================================== 00:08:41.053 Total : 10909.65 42.62 0.00 0.00 11708.94 2054.30 27021.00 00:08:41.053 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2538590 00:08:41.053 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2538592 00:08:41.313 203264.00 IOPS, 794.00 MiB/s 00:08:41.313 Latency(us) 00:08:41.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.313 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:41.313 Nvme1n1 : 1.00 202889.35 792.54 0.00 0.00 627.65 286.72 1827.45 00:08:41.313 =================================================================================================================== 00:08:41.313 Total : 202889.35 792.54 0.00 
0.00 627.65 286.72 1827.45 00:08:41.313 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2538596 00:08:41.313 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:41.313 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.313 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.313 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.313 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:41.313 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:41.313 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:41.313 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:41.313 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:41.313 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:41.313 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:41.313 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:41.313 rmmod nvme_tcp 00:08:41.313 rmmod nvme_fabrics 00:08:41.313 rmmod nvme_keyring 00:08:41.313 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:41.574 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:41.574 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:41.574 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 2538271 ']' 00:08:41.574 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 2538271 00:08:41.574 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2538271 ']' 00:08:41.574 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2538271 00:08:41.574 16:33:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2538271 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2538271' 00:08:41.574 killing process with pid 2538271 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2538271 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2538271 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # 
'[' '' == iso ']' 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.574 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:44.158 00:08:44.158 real 0m12.824s 00:08:44.158 user 0m19.904s 00:08:44.158 sys 0m6.882s 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:44.158 ************************************ 00:08:44.158 END TEST nvmf_bdev_io_wait 00:08:44.158 ************************************ 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.158 ************************************ 00:08:44.158 START TEST nvmf_queue_depth 00:08:44.158 ************************************ 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:44.158 * Looking for test storage... 
00:08:44.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.158 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:44.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.159 --rc genhtml_branch_coverage=1 00:08:44.159 --rc genhtml_function_coverage=1 00:08:44.159 --rc genhtml_legend=1 00:08:44.159 --rc geninfo_all_blocks=1 00:08:44.159 --rc geninfo_unexecuted_blocks=1 00:08:44.159 00:08:44.159 ' 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:44.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.159 --rc genhtml_branch_coverage=1 00:08:44.159 --rc genhtml_function_coverage=1 00:08:44.159 --rc genhtml_legend=1 00:08:44.159 --rc geninfo_all_blocks=1 00:08:44.159 --rc geninfo_unexecuted_blocks=1 00:08:44.159 00:08:44.159 ' 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:44.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.159 --rc genhtml_branch_coverage=1 00:08:44.159 --rc genhtml_function_coverage=1 00:08:44.159 --rc genhtml_legend=1 00:08:44.159 --rc geninfo_all_blocks=1 00:08:44.159 --rc geninfo_unexecuted_blocks=1 00:08:44.159 00:08:44.159 ' 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:44.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.159 --rc genhtml_branch_coverage=1 00:08:44.159 --rc genhtml_function_coverage=1 00:08:44.159 --rc genhtml_legend=1 00:08:44.159 --rc geninfo_all_blocks=1 00:08:44.159 --rc geninfo_unexecuted_blocks=1 00:08:44.159 00:08:44.159 ' 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:44.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:44.159 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:50.852 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.852 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:50.853 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:50.853 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:50.853 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.853 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.114 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.114 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.114 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:51.114 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.114 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.114 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.114 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:51.114 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:51.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:08:51.114 00:08:51.114 --- 10.0.0.2 ping statistics --- 00:08:51.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.114 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:08:51.114 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:51.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:08:51.114 00:08:51.114 --- 10.0.0.1 ping statistics --- 00:08:51.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.114 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:08:51.114 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.114 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:08:51.114 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:51.114 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.114 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:51.114 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:51.114 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.114 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:51.114 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:51.375 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:51.375 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:51.375 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:51.375 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.375 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=2542855 00:08:51.375 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 2542855 00:08:51.375 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:51.375 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2542855 ']' 00:08:51.375 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.375 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.375 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.375 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.375 16:33:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.375 [2024-10-01 16:33:42.894059] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
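The nvmf_tcp_init steps traced above split the two e810 ports between the root namespace and a private one, so a single host can act as both NVMe/TCP initiator and target over real NICs. A condensed sketch of that wiring, using the interface names and addresses from the trace and assuming the ports were already renamed cvl_0_0/cvl_0_1 by the harness:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # drop stale addresses
    ip netns add cvl_0_0_ns_spdk                           # target gets a private namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Tagged ACCEPT rule; nvmftestfini later strips anything matching SPDK_NVMF.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                     # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator

The two ping checks at the end are what produced the statistics blocks above, and they confirm both directions are reachable before nvmf_tgt is launched inside the namespace.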
00:08:51.375 [2024-10-01 16:33:42.894130] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.375 [2024-10-01 16:33:42.960397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.375 [2024-10-01 16:33:43.025438] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.375 [2024-10-01 16:33:43.025477] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.375 [2024-10-01 16:33:43.025483] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.375 [2024-10-01 16:33:43.025489] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.375 [2024-10-01 16:33:43.025493] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.375 [2024-10-01 16:33:43.025511] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.635 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.635 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:51.635 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:51.635 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:51.635 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.636 [2024-10-01 16:33:43.152863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.636 Malloc0 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.636 16:33:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.636 [2024-10-01 16:33:43.198097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2542884 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2542884 /var/tmp/bdevperf.sock 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2542884 ']' 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:51.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.636 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.636 [2024-10-01 16:33:43.253361] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
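The rpc_cmd calls above provision the target in four moves: create the TCP transport, create a 64 MiB malloc bdev with 512-byte blocks (the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values set earlier), wrap it in subsystem nqn.2016-06.io.spdk:cnode1, and listen on 10.0.0.2:4420. rpc_cmd is effectively a wrapper around scripts/rpc.py, so the same sequence run by hand against a live nvmf_tgt would look roughly like this (names and flags taken verbatim from the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # transport options as recorded above
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf is then started with -q 1024 -o 4096 -w verify -t 10, i.e. it will hold 1024 outstanding 4 KiB verify I/Os against the attached namespace for ten seconds once perform_tests is triggered over its RPC socket at /var/tmp/bdevperf.sock.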
00:08:51.636 [2024-10-01 16:33:43.253405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2542884 ] 00:08:51.896 [2024-10-01 16:33:43.329428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.896 [2024-10-01 16:33:43.391316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.896 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.896 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:51.896 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:51.896 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.896 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.156 NVMe0n1 00:08:52.156 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.156 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:52.156 Running I/O for 10 seconds... 00:09:02.449 11264.00 IOPS, 44.00 MiB/s 11664.50 IOPS, 45.56 MiB/s 11703.00 IOPS, 45.71 MiB/s 11769.00 IOPS, 45.97 MiB/s 11803.60 IOPS, 46.11 MiB/s 11773.50 IOPS, 45.99 MiB/s 11787.29 IOPS, 46.04 MiB/s 11774.12 IOPS, 45.99 MiB/s 11768.00 IOPS, 45.97 MiB/s 11774.60 IOPS, 45.99 MiB/s 00:09:02.449 Latency(us) 00:09:02.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.449 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:02.449 Verification LBA range: start 0x0 length 0x4000 00:09:02.449 NVMe0n1 : 10.07 11793.54 46.07 0.00 0.00 86541.30 22282.24 58881.58 00:09:02.449 =================================================================================================================== 00:09:02.449 Total : 11793.54 46.07 0.00 0.00 86541.30 22282.24 58881.58 00:09:02.449 { 00:09:02.449 "results": [ 00:09:02.449 { 00:09:02.449 "job": "NVMe0n1", 00:09:02.449 "core_mask": "0x1", 00:09:02.449 "workload": "verify", 00:09:02.449 "status": "finished", 00:09:02.449 "verify_range": { 00:09:02.449 "start": 0, 00:09:02.449 "length": 16384 00:09:02.449 }, 00:09:02.449 "queue_depth": 1024, 00:09:02.449 "io_size": 4096, 00:09:02.449 "runtime": 10.070679, 00:09:02.449 "iops": 11793.544407482355, 00:09:02.449 "mibps": 46.06853284172795, 00:09:02.449 "io_failed": 0, 00:09:02.449 "io_timeout": 0, 00:09:02.449 "avg_latency_us": 86541.3008175534, 00:09:02.449 "min_latency_us": 22282.24, 00:09:02.449 "max_latency_us": 58881.57538461538 00:09:02.449 } 00:09:02.449 ], 00:09:02.449 "core_count": 1 00:09:02.449 } 00:09:02.449 16:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2542884 00:09:02.449 16:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2542884 ']' 00:09:02.449 16:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2542884 00:09:02.449 16:33:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:02.449 16:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.449 16:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2542884 00:09:02.449 16:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:02.449 16:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:02.449 16:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2542884' 00:09:02.449 killing process with pid 2542884 00:09:02.449 16:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2542884 00:09:02.450 Received shutdown signal, test time was about 10.000000 seconds 00:09:02.450 00:09:02.450 Latency(us) 00:09:02.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.450 =================================================================================================================== 00:09:02.450 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:02.450 16:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2542884 00:09:02.450 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:02.450 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:02.450 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:02.450 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:02.450 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:02.450 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:02.450 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:02.450 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:02.450 rmmod nvme_tcp 00:09:02.450 rmmod nvme_fabrics 00:09:02.450 rmmod nvme_keyring 00:09:02.450 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:02.450 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:02.450 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:02.450 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 2542855 ']' 00:09:02.450 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 2542855 00:09:02.450 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2542855 ']' 00:09:02.450 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2542855 00:09:02.450 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:02.450 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.450 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2542855 00:09:02.713 16:33:54 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:02.713 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:02.713 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2542855' 00:09:02.713 killing process with pid 2542855 00:09:02.713 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2542855 00:09:02.713 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2542855 00:09:02.713 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:02.713 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:02.713 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:02.713 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:02.713 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:09:02.713 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:02.713 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:09:02.713 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:02.713 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:02.713 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.713 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.713 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:05.255 00:09:05.255 real 0m21.014s 00:09:05.255 user 0m23.843s 00:09:05.255 sys 0m6.627s 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:05.255 ************************************ 00:09:05.255 END TEST nvmf_queue_depth 00:09:05.255 ************************************ 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.255 ************************************ 00:09:05.255 START TEST nvmf_target_multipath 00:09:05.255 ************************************ 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:05.255 * Looking for test storage... 
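One sanity check on the queue-depth numbers reported above: with 1024 I/Os outstanding, Little's law (outstanding I/Os = IOPS x average latency) predicts an average latency of 1024 / 11793.54 IOPS ≈ 86.8 ms, which lines up with the reported 86541.30 µs average (the small gap is the 10.07 s actual runtime versus the nominal 10 s). The queue really was kept full for the whole run, so the test exercised what its name claims: target behavior under a saturated 1024-deep queue.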
00:09:05.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:05.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.255 --rc genhtml_branch_coverage=1 00:09:05.255 --rc genhtml_function_coverage=1 00:09:05.255 --rc genhtml_legend=1 00:09:05.255 --rc geninfo_all_blocks=1 00:09:05.255 --rc geninfo_unexecuted_blocks=1 00:09:05.255 00:09:05.255 ' 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:05.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.255 --rc genhtml_branch_coverage=1 00:09:05.255 --rc genhtml_function_coverage=1 00:09:05.255 --rc genhtml_legend=1 00:09:05.255 --rc geninfo_all_blocks=1 00:09:05.255 --rc geninfo_unexecuted_blocks=1 00:09:05.255 00:09:05.255 ' 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:05.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.255 --rc genhtml_branch_coverage=1 00:09:05.255 --rc genhtml_function_coverage=1 00:09:05.255 --rc genhtml_legend=1 00:09:05.255 --rc geninfo_all_blocks=1 00:09:05.255 --rc geninfo_unexecuted_blocks=1 00:09:05.255 00:09:05.255 ' 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:05.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.255 --rc genhtml_branch_coverage=1 00:09:05.255 --rc genhtml_function_coverage=1 00:09:05.255 --rc genhtml_legend=1 00:09:05.255 --rc geninfo_all_blocks=1 00:09:05.255 --rc geninfo_unexecuted_blocks=1 00:09:05.255 00:09:05.255 ' 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.255 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:05.256 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:11.862 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.862 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:11.863 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:11.863 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:11.863 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.863 16:34:03 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:11.863 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:11.863 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:12.124 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:12.124 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:12.124 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:12.124 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:12.124 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:12.124 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:12.124 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:12.124 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:12.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:09:12.124 00:09:12.124 --- 10.0.0.2 ping statistics --- 00:09:12.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.124 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:09:12.124 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:12.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:09:12.386 00:09:12.386 --- 10.0.0.1 ping statistics --- 00:09:12.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.386 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:12.386 only one NIC for nvmf test 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
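For reference, the network topology that nvmf_tcp_init assembles in the trace above reduces to the following standalone sketch; every command, interface name (cvl_0_0/cvl_0_1), and address is copied from this run, and root privileges plus the two renamed E810 ports are assumed:

  # start from clean addresses, then isolate the target port in its own namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator side stays in the default namespace, target side lives in the new one
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port; the comment tag lets teardown strip exactly this rule later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # prove two-way reachability before any NVMe traffic flows
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmftestfini teardown traced further down is the inverse of this: iptables-save | grep -v SPDK_NVMF | iptables-restore drops only the tagged rule, and removing the namespace hands cvl_0_0 back to the default namespace.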
00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:12.386 rmmod nvme_tcp 00:09:12.386 rmmod nvme_fabrics 00:09:12.386 rmmod nvme_keyring 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.386 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:14.933 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:14.934 00:09:14.934 real 0m9.611s 00:09:14.934 user 0m2.115s 00:09:14.934 sys 0m5.431s 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:14.934 ************************************ 00:09:14.934 END TEST nvmf_target_multipath 00:09:14.934 ************************************ 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.934 ************************************ 00:09:14.934 START TEST nvmf_zcopy 00:09:14.934 ************************************ 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:14.934 * Looking for test storage... 
00:09:14.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:14.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.934 --rc genhtml_branch_coverage=1 00:09:14.934 --rc genhtml_function_coverage=1 00:09:14.934 --rc genhtml_legend=1 00:09:14.934 --rc geninfo_all_blocks=1 00:09:14.934 --rc geninfo_unexecuted_blocks=1 00:09:14.934 00:09:14.934 ' 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:14.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.934 --rc genhtml_branch_coverage=1 00:09:14.934 --rc genhtml_function_coverage=1 00:09:14.934 --rc genhtml_legend=1 00:09:14.934 --rc geninfo_all_blocks=1 00:09:14.934 --rc geninfo_unexecuted_blocks=1 00:09:14.934 00:09:14.934 ' 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:14.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.934 --rc genhtml_branch_coverage=1 00:09:14.934 --rc genhtml_function_coverage=1 00:09:14.934 --rc genhtml_legend=1 00:09:14.934 --rc geninfo_all_blocks=1 00:09:14.934 --rc geninfo_unexecuted_blocks=1 00:09:14.934 00:09:14.934 ' 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:14.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.934 --rc genhtml_branch_coverage=1 00:09:14.934 --rc genhtml_function_coverage=1 00:09:14.934 --rc genhtml_legend=1 00:09:14.934 --rc geninfo_all_blocks=1 00:09:14.934 --rc geninfo_unexecuted_blocks=1 00:09:14.934 00:09:14.934 ' 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.934 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:14.935 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:23.073 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:23.073 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.073 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:23.074 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:23.074 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:23.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:09:23.074 00:09:23.074 --- 10.0.0.2 ping statistics --- 00:09:23.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.074 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:23.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:23.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:09:23.074 00:09:23.074 --- 10.0.0.1 ping statistics --- 00:09:23.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.074 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=2552721 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 2552721 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2552721 ']' 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.074 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.074 [2024-10-01 16:34:13.819234] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:09:23.074 [2024-10-01 16:34:13.819299] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.074 [2024-10-01 16:34:13.882457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.074 [2024-10-01 16:34:13.948512] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.074 [2024-10-01 16:34:13.948550] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.074 [2024-10-01 16:34:13.948556] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.074 [2024-10-01 16:34:13.948561] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.074 [2024-10-01 16:34:13.948565] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.074 [2024-10-01 16:34:13.948583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.075 [2024-10-01 16:34:14.068876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.075 [2024-10-01 16:34:14.085051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.075 malloc0 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:23.075 { 00:09:23.075 "params": { 00:09:23.075 "name": "Nvme$subsystem", 00:09:23.075 "trtype": "$TEST_TRANSPORT", 00:09:23.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:23.075 "adrfam": "ipv4", 00:09:23.075 "trsvcid": "$NVMF_PORT", 00:09:23.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:23.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:23.075 "hdgst": ${hdgst:-false}, 00:09:23.075 "ddgst": ${ddgst:-false} 00:09:23.075 }, 00:09:23.075 "method": "bdev_nvme_attach_controller" 00:09:23.075 } 00:09:23.075 EOF 00:09:23.075 )") 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
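Stripped of the rpc_cmd wrapper, the target bring-up traced above is a short RPC sequence. A minimal sketch, assuming scripts/rpc.py from the SPDK tree stands in for rpc_cmd and the target app is up and listening on /var/tmp/spdk.sock inside the namespace (all flags copied from the trace):

  # launch the target in the test namespace
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # transport, subsystem, listeners, and a 32 MiB / 4 KiB-block malloc namespace
  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The --zcopy flag on nvmf_create_transport is what selects the zero-copy TCP path this test suite exercises.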
00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:23.075 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:23.075 "params": { 00:09:23.075 "name": "Nvme1", 00:09:23.075 "trtype": "tcp", 00:09:23.075 "traddr": "10.0.0.2", 00:09:23.075 "adrfam": "ipv4", 00:09:23.075 "trsvcid": "4420", 00:09:23.075 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:23.075 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:23.075 "hdgst": false, 00:09:23.075 "ddgst": false 00:09:23.075 }, 00:09:23.075 "method": "bdev_nvme_attach_controller" 00:09:23.075 }' 00:09:23.075 [2024-10-01 16:34:14.171234] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:09:23.075 [2024-10-01 16:34:14.171280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552861 ] 00:09:23.075 [2024-10-01 16:34:14.247504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.075 [2024-10-01 16:34:14.308923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.075 Running I/O for 10 seconds... 00:09:32.912 9318.00 IOPS, 72.80 MiB/s 9379.00 IOPS, 73.27 MiB/s 9409.00 IOPS, 73.51 MiB/s 9420.25 IOPS, 73.60 MiB/s 9424.80 IOPS, 73.63 MiB/s 9427.33 IOPS, 73.65 MiB/s 9429.29 IOPS, 73.67 MiB/s 9432.75 IOPS, 73.69 MiB/s 9438.11 IOPS, 73.74 MiB/s 9440.10 IOPS, 73.75 MiB/s 00:09:32.912 Latency(us) 00:09:32.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.912 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:32.912 Verification LBA range: start 0x0 length 0x1000 00:09:32.913 Nvme1n1 : 10.01 9441.24 73.76 0.00 0.00 13509.54 1928.27 23290.49 00:09:32.913 =================================================================================================================== 00:09:32.913 Total : 9441.24 73.76 0.00 0.00 13509.54 1928.27 23290.49 00:09:33.174 16:34:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2554577 00:09:33.174 16:34:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:33.174 16:34:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.174 16:34:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:33.174 16:34:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:33.174 16:34:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:33.174 16:34:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:33.174 16:34:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:33.174 16:34:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:33.174 { 00:09:33.174 "params": { 00:09:33.174 "name": "Nvme$subsystem", 00:09:33.174 "trtype": "$TEST_TRANSPORT", 00:09:33.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.174 "adrfam": "ipv4", 00:09:33.174 "trsvcid": "$NVMF_PORT", 00:09:33.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:33.174 "hdgst": 
${hdgst:-false}, 00:09:33.174 "ddgst": ${ddgst:-false} 00:09:33.174 }, 00:09:33.174 "method": "bdev_nvme_attach_controller" 00:09:33.174 } 00:09:33.174 EOF 00:09:33.174 )") 00:09:33.174 [2024-10-01 16:34:24.697601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.174 [2024-10-01 16:34:24.697629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.174 16:34:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:33.174 16:34:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:09:33.174 [2024-10-01 16:34:24.705580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.174 [2024-10-01 16:34:24.705589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.174 16:34:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:33.174 16:34:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:33.174 "params": { 00:09:33.174 "name": "Nvme1", 00:09:33.174 "trtype": "tcp", 00:09:33.174 "traddr": "10.0.0.2", 00:09:33.174 "adrfam": "ipv4", 00:09:33.174 "trsvcid": "4420", 00:09:33.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.174 "hdgst": false, 00:09:33.174 "ddgst": false 00:09:33.174 }, 00:09:33.174 "method": "bdev_nvme_attach_controller" 00:09:33.174 }' 00:09:33.174 [2024-10-01 16:34:24.713598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.174 [2024-10-01 16:34:24.713607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.174 [2024-10-01 16:34:24.721619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.174 [2024-10-01 16:34:24.721627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.174 [2024-10-01 16:34:24.729640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.174 [2024-10-01 16:34:24.729648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.174 [2024-10-01 16:34:24.737659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.174 [2024-10-01 16:34:24.737667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.174 [2024-10-01 16:34:24.740548] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:09:33.174 [2024-10-01 16:34:24.740548] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:09:33.174 [2024-10-01 16:34:24.740592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2554577 ]
00:09:33.174 [... NSID error pair repeats, 16:34:24.745 - 16:34:24.809 ...]
00:09:33.174 [2024-10-01 16:34:24.814691] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:33.174 [... NSID error pair repeats, 16:34:24.817 - 16:34:24.874 ...]
00:09:33.435 [2024-10-01 16:34:24.875695] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
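
The pair flooding this run is what the target reports when an RPC asks to add a namespace under an NSID that is still attached. A minimal reproduction sketch against a running nvmf target follows; the rpc.py path, NQN, serial number, and bdev name are assumed for illustration, not taken from this run:

# All names and paths below are assumptions for illustration.
./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
# Adding the same -n 1 again while NSID 1 is in use yields exactly the pair above:
# subsystem.c "Requested NSID 1 already in use", then nvmf_rpc.c "Unable to add namespace"
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
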
00:09:33.435 [... NSID error pair repeats, 16:34:24.882 - 16:34:25.130; elapsed prefix advances through 00:09:33.436 ...]
00:09:33.696 Running I/O for 5 seconds...
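
bdevperf's per-interval stat lines below report IOPS and MiB/s, which convert as MiB/s = IOPS * io_size / 2^20; back-solving the figures gives an 8 KiB I/O size. A quick sanity check (the awk invocation is mine, not from the run):

awk 'BEGIN { printf "%.2f MiB/s\n", 18546 * 8192 / 1048576 }'   # 144.89 MiB/s, matching the first stat line
awk 'BEGIN { printf "%.2f MiB/s\n", 18596 * 8192 / 1048576 }'   # 145.28 MiB/s, matching the second
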
00:09:33.696 [... NSID error pair repeats, 16:34:25.138 - 16:34:26.132; elapsed prefix advances through 00:09:33.697, 00:09:33.958, 00:09:34.219, 00:09:34.480 ...]
00:09:34.480 18546.00 IOPS, 144.89 MiB/s
00:09:34.481 [... NSID error pair repeats, 16:34:26.141 - 16:34:27.140; elapsed prefix advances through 00:09:34.741, 00:09:35.002, 00:09:35.265 ...]
00:09:35.527 18596.00 IOPS, 145.28 MiB/s
00:09:35.527 [... NSID error pair repeats, 16:34:27.149 - 16:34:27.175 ...]
00:09:35.527 [2024-10-01 16:34:27.184401]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.527 [2024-10-01 16:34:27.184415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.527 [2024-10-01 16:34:27.193206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.527 [2024-10-01 16:34:27.193221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.527 [2024-10-01 16:34:27.201984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.527 [2024-10-01 16:34:27.201999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.788 [2024-10-01 16:34:27.210561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.788 [2024-10-01 16:34:27.210576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.788 [2024-10-01 16:34:27.219050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.788 [2024-10-01 16:34:27.219064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.788 [2024-10-01 16:34:27.228159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.788 [2024-10-01 16:34:27.228174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.788 [2024-10-01 16:34:27.236088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.788 [2024-10-01 16:34:27.236102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.788 [2024-10-01 16:34:27.244949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.788 [2024-10-01 16:34:27.244965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.788 [2024-10-01 16:34:27.253179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.788 [2024-10-01 16:34:27.253193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.262254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.262269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.271213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.271228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.280352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.280366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.289082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.289096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.298318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.298332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.307106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.307120] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.315339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.315354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.324341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.324356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.333315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.333329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.342015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.342030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.350790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.350806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.359544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.359558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.368210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.368225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.377015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.377030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.386106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.386121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.394193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.394207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.403122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.403136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.412337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.412352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.421709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.421723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.430306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.430320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.438950] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.438964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.447792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.447807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.457237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.457251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.789 [2024-10-01 16:34:27.465951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.789 [2024-10-01 16:34:27.465966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.475231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.475245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.484584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.484599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.493503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.493519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.502650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.502668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.511760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.511775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.520595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.520609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.529701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.529716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.538991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.539006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.547076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.547091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.556133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.556148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.564620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.564634] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.573564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.573579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.582396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.582411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.591450] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.591465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.600195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.600211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.608993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.609007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.617538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.617553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.626395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.626410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.635599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.635614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.644299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.644314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.653352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.653366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.662021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.662035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.670423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.670441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.679342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.679357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.688443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.688458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.697624] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.697639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.706312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.706327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.715403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.715418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.050 [2024-10-01 16:34:27.724423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.050 [2024-10-01 16:34:27.724438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.732983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.732998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.741646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.741661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.750798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.750814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.759474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.759489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.767487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.767502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.776826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.776841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.786178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.786193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.794928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.794942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.803626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.803641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.812335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.812349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.820845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.820860] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.829504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.829519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.837782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.837804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.846614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.846628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.855613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.855628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.863657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.863671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.872680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.872695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.881401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.881417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.890845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.890860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.899827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.899842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.311 [2024-10-01 16:34:27.908667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.311 [2024-10-01 16:34:27.908681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.312 [2024-10-01 16:34:27.917763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.312 [2024-10-01 16:34:27.917778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.312 [2024-10-01 16:34:27.926479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.312 [2024-10-01 16:34:27.926494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.312 [2024-10-01 16:34:27.935390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.312 [2024-10-01 16:34:27.935405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.312 [2024-10-01 16:34:27.943875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.312 [2024-10-01 16:34:27.943890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.312 [2024-10-01 16:34:27.952779] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.312 [2024-10-01 16:34:27.952794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.312 [2024-10-01 16:34:27.960984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.312 [2024-10-01 16:34:27.960998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.312 [2024-10-01 16:34:27.969835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.312 [2024-10-01 16:34:27.969850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.312 [2024-10-01 16:34:27.978796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.312 [2024-10-01 16:34:27.978811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.312 [2024-10-01 16:34:27.987188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.312 [2024-10-01 16:34:27.987203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:27.996198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:27.996214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.005385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.005403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.014577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.014591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.023039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.023053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.032055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.032070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.041303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.041318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.049945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.049960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.059000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.059014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.067660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.067675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.076631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.076646] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.085554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.085569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.094889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.094905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.103137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.103152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.111551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.111566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.120276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.120292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.128908] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.128923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.137656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.137671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 18623.67 IOPS, 145.50 MiB/s [2024-10-01 16:34:28.145902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.145917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.154865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.154880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.163669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.163685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.172340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.172355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.181431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.181446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.190329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.190343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.199398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.199413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.207902] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.207917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.216978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.216993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.225622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.225637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.234604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.234618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.243830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.243845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.574 [2024-10-01 16:34:28.253096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.574 [2024-10-01 16:34:28.253111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.834 [2024-10-01 16:34:28.261680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.834 [2024-10-01 16:34:28.261695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.834 [2024-10-01 16:34:28.270511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.834 [2024-10-01 16:34:28.270526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.834 [2024-10-01 16:34:28.279038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.834 [2024-10-01 16:34:28.279053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.287619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.287634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.296192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.296207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.304927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.304942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.313377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.313392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.321799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.321814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.330974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.330990] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.339733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.339748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.347909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.347924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.357051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.357066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.366092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.366107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.375212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.375227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.384102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.384117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.392741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.392756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.401369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.401384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.410137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.410152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.419217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.419232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.427871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.427887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.437041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.437056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.445245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.445260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.453779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.453794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.462622] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.462637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.471066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.471081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.479804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.479819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.488572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.488587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.497484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.497498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.506700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.506715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.835 [2024-10-01 16:34:28.514951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.835 [2024-10-01 16:34:28.514965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.523246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.523261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.532432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.532447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.540998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.541013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.550466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.550482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.559133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.559148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.567783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.567798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.576585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.576600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.585471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.585486] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.594014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.594036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.602998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.603013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.611712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.611727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.621238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.621255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.629989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.630003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.638490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.638504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.647469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.647484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.656143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.656157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.664282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.664301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.673007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.673022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.099 [2024-10-01 16:34:28.681720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.099 [2024-10-01 16:34:28.681735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.100 [2024-10-01 16:34:28.690324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.100 [2024-10-01 16:34:28.690339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.100 [2024-10-01 16:34:28.698904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.100 [2024-10-01 16:34:28.698918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.100 [2024-10-01 16:34:28.708113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.100 [2024-10-01 16:34:28.708128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.100 [2024-10-01 16:34:28.716927] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.100 [2024-10-01 16:34:28.716942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.100 [2024-10-01 16:34:28.725317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.100 [2024-10-01 16:34:28.725332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.100 [2024-10-01 16:34:28.733752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.100 [2024-10-01 16:34:28.733766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.100 [2024-10-01 16:34:28.742788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.100 [2024-10-01 16:34:28.742804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.100 [2024-10-01 16:34:28.750988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.100 [2024-10-01 16:34:28.751003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.100 [2024-10-01 16:34:28.760002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.100 [2024-10-01 16:34:28.760017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.100 [2024-10-01 16:34:28.768856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.100 [2024-10-01 16:34:28.768870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.100 [2024-10-01 16:34:28.777804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.100 [2024-10-01 16:34:28.777818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.361 [2024-10-01 16:34:28.786525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.361 [2024-10-01 16:34:28.786539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.361 [2024-10-01 16:34:28.795600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.361 [2024-10-01 16:34:28.795615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.361 [2024-10-01 16:34:28.804366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.361 [2024-10-01 16:34:28.804381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.361 [2024-10-01 16:34:28.812748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.361 [2024-10-01 16:34:28.812762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.361 [2024-10-01 16:34:28.821562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.361 [2024-10-01 16:34:28.821577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.361 [2024-10-01 16:34:28.830681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.361 [2024-10-01 16:34:28.830699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.361 [2024-10-01 16:34:28.839148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.361 [2024-10-01 16:34:28.839163] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:37.361 [2024-10-01 16:34:28.848639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:37.361 [2024-10-01 16:34:28.848654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2128 "Requested NSID 1 already in use" / nvmf_rpc.c:1517 "Unable to add namespace" pair repeats roughly every 9 ms from [2024-10-01 16:34:28.857405] through [2024-10-01 16:34:30.152172] (~150 repetitions); duplicate entries collapsed, interleaved I/O progress lines kept below ...]
00:09:37.622 18633.00 IOPS, 145.57 MiB/s
00:09:38.666 18632.80 IOPS, 145.57 MiB/s
00:09:38.666 Latency(us)
00:09:38.666 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:38.666 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:38.666 Nvme1n1                     :       5.01   18642.59     145.65       0.00     0.00    6859.33    2785.28   17745.13
00:09:38.666 ===================================================================================================================
00:09:38.666 Total                       :   18642.59     145.65       0.00       0.00    6859.33    2785.28   17745.13
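A quick cross-check of the summary table above, assuming the 8192-byte IO size printed in the job line: MiB/s should equal IOPS times IO size.

    # 18642.59 IOPS * 8192 B per IO -> MiB/s; prints 145.65, matching the table
    awk 'BEGIN { printf "%.2f MiB/s\n", 18642.59 * 8192 / (1024 * 1024) }'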
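The teardown that follows swaps NSID 1 onto a 1-second delay bdev via rpc_cmd. A standalone sketch of the same sequence with SPDK's scripts/rpc.py (subsystem NQN, bdev names, and latencies taken from the trace below; treat the exact flag spellings as version-dependent):

    # drop NSID 1, wrap malloc0 in a delay bdev (all latencies 1000000 us), re-add it as NSID 1
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1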
[... the error pair continues for 17 more repetitions, [2024-10-01 16:34:30.158797] through [2024-10-01 16:34:30.287132], while zcopy.sh moves on to teardown; duplicate entries collapsed ...]
00:09:38.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2554577) - No such process
00:09:38.666 16:34:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2554577
00:09:38.666 16:34:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:38.666 16:34:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.666 16:34:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:38.666 16:34:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.666 16:34:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:38.666 16:34:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.666 16:34:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:38.666 delay0
00:09:38.666 16:34:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.666 16:34:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:38.666 16:34:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.666 16:34:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:38.666 16:34:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.666 16:34:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:38.926 [2024-10-01 16:34:30.437054] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:45.580 [2024-10-01 16:34:36.673341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe413e0 is same with the state(6) to be set
00:09:45.580 Initializing NVMe Controllers
00:09:45.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:45.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:45.580 Initialization complete. Launching workers.
00:09:45.581 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 112
00:09:45.581 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 399, failed to submit 33
00:09:45.581 success 211, unsuccessful 188, failed 0
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:45.581 rmmod nvme_tcp
00:09:45.581 rmmod nvme_fabrics
00:09:45.581 rmmod nvme_keyring
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 2552721 ']'
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 2552721
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2552721 ']'
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2552721
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2552721
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
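A consistency check on the abort run above: 211 successful plus 188 unsuccessful aborts account for all 399 submitted (the 33 that failed to submit are tracked separately).

    echo $((211 + 188))   # prints 399, matching 'abort submitted 399'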
00:09:45.581 16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2552721'
killing process with pid 2552721
16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2552721
16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2552721
16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']'
16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini
16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save
16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore
16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
16:34:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:47.496 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:47.496
00:09:47.496 real 0m32.863s
00:09:47.496 user 0m44.548s
00:09:47.496 sys 0m9.396s
00:09:47.496 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:47.496 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:47.496 ************************************
00:09:47.496 END TEST nvmf_zcopy
00:09:47.496 ************************************
00:09:47.496 16:34:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
16:34:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
16:34:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
16:34:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:47.760 ************************************
00:09:47.760 START TEST nvmf_nmic
00:09:47.760 ************************************
00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:47.760 * Looking for test storage...
00:09:47.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.760 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:47.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.761 --rc genhtml_branch_coverage=1 00:09:47.761 --rc genhtml_function_coverage=1 00:09:47.761 --rc genhtml_legend=1 00:09:47.761 --rc geninfo_all_blocks=1 00:09:47.761 --rc geninfo_unexecuted_blocks=1 00:09:47.761 00:09:47.761 ' 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:47.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.761 --rc genhtml_branch_coverage=1 00:09:47.761 --rc genhtml_function_coverage=1 00:09:47.761 --rc genhtml_legend=1 00:09:47.761 --rc geninfo_all_blocks=1 00:09:47.761 --rc geninfo_unexecuted_blocks=1 00:09:47.761 00:09:47.761 ' 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:47.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.761 --rc genhtml_branch_coverage=1 00:09:47.761 --rc genhtml_function_coverage=1 00:09:47.761 --rc genhtml_legend=1 00:09:47.761 --rc geninfo_all_blocks=1 00:09:47.761 --rc geninfo_unexecuted_blocks=1 00:09:47.761 00:09:47.761 ' 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:47.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.761 --rc genhtml_branch_coverage=1 00:09:47.761 --rc genhtml_function_coverage=1 00:09:47.761 --rc genhtml_legend=1 00:09:47.761 --rc geninfo_all_blocks=1 00:09:47.761 --rc geninfo_unexecuted_blocks=1 00:09:47.761 00:09:47.761 ' 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
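The cmp_versions trace above checks whether lcov's version (1.15) predates 2 by comparing dotted fields numerically, not as strings. A minimal standalone sketch of that comparison (an assumed simplification; the real scripts/common.sh also splits on '-' and ':' and supports more operators):

    # succeeds when dotted version $1 is strictly older than $2
    version_lt() {
        local IFS=. i
        local -a v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2"        # field-wise: 1 < 2
    version_lt 1.15 1.9 || echo "1.15 >= 1.9"   # field-wise: 15 > 9, unlike a string compare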
00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:47.761 
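The "[: : integer expression expected" message above is bash's test builtin rejecting an empty operand in common.sh's '[' '' -eq 1 ']'. A minimal reproduction with a guard (FLAG is a hypothetical stand-in for whichever config variable was left unset):

    FLAG=""
    [ "$FLAG" -eq 1 ]        # bash: [: : integer expression expected
    [ "${FLAG:-0}" -eq 1 ]   # defaulting the empty value to 0 avoids the error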
16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:47.761 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:47.762 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:47.762 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.762 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.762 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.762 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:47.762 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:47.762 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:47.762 16:34:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.345 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:54.346 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:54.346 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:54.346 16:34:45 
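The "Found ..." lines above come from walking pci_devs, which was filled from ID-keyed arrays: the intel=0x8086 / mellanox=0x15b3 vendor codes paired with device codes such as 0x159b (the two E810 ports found here, bound to the ice driver). As a rough sketch of the lookup structure, here is one way such a pci_bus_cache map could be built with lspci; this is illustrative, not SPDK's actual cache-population code:

    declare -A pci_bus_cache
    while read -r addr vendor device; do
        # key "0x8086:0x159b" accumulates addresses like "0000:4b:00.0 0000:4b:00.1"
        pci_bus_cache["$vendor:$device"]+="0000:$addr "
    done < <(lspci -nm | awk '{gsub(/"/, ""); print $1, "0x"$3, "0x"$4}')
    e810=(${pci_bus_cache["0x8086:0x1592"]} ${pci_bus_cache["0x8086:0x159b"]})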
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:54.346 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:54.346 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
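The pci_net_devs glob above is the entire PCI-to-netdev mapping: the kernel lists every interface owned by a PCI function under that function's sysfs node, so stripping the directory prefix leaves the interface names. Condensed from the commands in the log:

    pci=0000:4b:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # absolute sysfs paths
    pci_net_devs=("${pci_net_devs[@]##*/}")            # basenames, e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"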
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.346 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:54.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:09:54.607 00:09:54.607 --- 10.0.0.2 ping statistics --- 00:09:54.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.607 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
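The namespace choreography above builds a point-to-point NVMe/TCP topology on a single host: target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2/24, initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, and a comment-tagged iptables rule admits port 4420; the cross-namespace pings then prove reachability in both directions. The same sequence, collected from the log into one runnable sketch:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator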
00:09:54.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:09:54.607 00:09:54.607 --- 10.0.0.1 ping statistics --- 00:09:54.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.607 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=2560464 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 2560464 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2560464 ']' 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.607 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.868 [2024-10-01 16:34:46.302184] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:09:54.868 [2024-10-01 16:34:46.302231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.868 [2024-10-01 16:34:46.382590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.868 [2024-10-01 16:34:46.445994] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.868 [2024-10-01 16:34:46.446031] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.868 [2024-10-01 16:34:46.446038] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.868 [2024-10-01 16:34:46.446044] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.868 [2024-10-01 16:34:46.446050] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.868 [2024-10-01 16:34:46.446085] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.868 [2024-10-01 16:34:46.446177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.868 [2024-10-01 16:34:46.446332] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.868 [2024-10-01 16:34:46.446341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.807 [2024-10-01 16:34:47.216123] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.807 Malloc0 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic 
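nvmfappstart above launches the target inside the namespace; RPC still works from the root namespace because /var/tmp/spdk.sock is a pathname Unix socket, a filesystem object rather than a per-netns resource. The bring-up logged through here, with the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 values set earlier feeding bdev_malloc_create, reduces to this sketch (paths abbreviated; rpc_cmd is assumed to wrap scripts/rpc.py):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # after the RPC socket comes up (waitforlisten in the log polls for it):
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME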
-- common/autotest_common.sh@10 -- # set +x 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.807 [2024-10-01 16:34:47.255759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:55.807 test case1: single bdev can't be used in multiple subsystems 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.807 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.808 [2024-10-01 16:34:47.279637] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:55.808 [2024-10-01 16:34:47.279654] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:55.808 [2024-10-01 16:34:47.279661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.808 request: 00:09:55.808 { 00:09:55.808 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:55.808 "namespace": { 00:09:55.808 "bdev_name": "Malloc0", 00:09:55.808 "no_auto_visible": false 
00:09:55.808 }, 00:09:55.808 "method": "nvmf_subsystem_add_ns", 00:09:55.808 "req_id": 1 00:09:55.808 } 00:09:55.808 Got JSON-RPC error response 00:09:55.808 response: 00:09:55.808 { 00:09:55.808 "code": -32602, 00:09:55.808 "message": "Invalid parameters" 00:09:55.808 } 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:55.808 Adding namespace failed - expected result. 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:55.808 test case2: host connect to nvmf target in multiple paths 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.808 [2024-10-01 16:34:47.291780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.808 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:57.191 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:59.102 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:59.102 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:59.102 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:59.102 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:59.102 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:01.012 16:34:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:01.012 16:34:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:01.012 16:34:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:01.012 16:34:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:01.012 16:34:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:01.012 16:34:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:01.012 16:34:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
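The two test cases above reduce to a short RPC/nvme-cli sequence: case1 shows that Malloc0, already claimed exclusive_write by cnode1, cannot be added to cnode2 (hence the -32602 "Invalid parameters" response, the expected result), and case2 connects the host to cnode1 over two listeners, 4420 and 4421, to exercise multiple paths. A sketch of that sequence as logged (rpc_cmd assumed to wrap scripts/rpc.py):

    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        || echo ' Adding namespace failed - expected result.'
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421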
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:01.012 [global] 00:10:01.012 thread=1 00:10:01.012 invalidate=1 00:10:01.012 rw=write 00:10:01.012 time_based=1 00:10:01.012 runtime=1 00:10:01.012 ioengine=libaio 00:10:01.012 direct=1 00:10:01.012 bs=4096 00:10:01.012 iodepth=1 00:10:01.012 norandommap=0 00:10:01.012 numjobs=1 00:10:01.012 00:10:01.012 verify_dump=1 00:10:01.013 verify_backlog=512 00:10:01.013 verify_state_save=0 00:10:01.013 do_verify=1 00:10:01.013 verify=crc32c-intel 00:10:01.013 [job0] 00:10:01.013 filename=/dev/nvme0n1 00:10:01.013 Could not set queue depth (nvme0n1) 00:10:01.273 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.273 fio-3.35 00:10:01.273 Starting 1 thread 00:10:02.656 00:10:02.656 job0: (groupid=0, jobs=1): err= 0: pid=2561862: Tue Oct 1 16:34:53 2024 00:10:02.656 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:02.657 slat (nsec): min=6401, max=57359, avg=26801.81, stdev=4465.86 00:10:02.657 clat (usec): min=466, max=1225, avg=913.05, stdev=136.80 00:10:02.657 lat (usec): min=493, max=1269, avg=939.85, stdev=137.10 00:10:02.657 clat percentiles (usec): 00:10:02.657 | 1.00th=[ 529], 5.00th=[ 603], 10.00th=[ 717], 20.00th=[ 816], 00:10:02.657 | 30.00th=[ 865], 40.00th=[ 906], 50.00th=[ 955], 60.00th=[ 979], 00:10:02.657 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1074], 00:10:02.657 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1221], 99.95th=[ 1221], 00:10:02.657 | 99.99th=[ 1221] 00:10:02.657 write: IOPS=887, BW=3548KiB/s (3634kB/s)(3552KiB/1001msec); 0 zone resets 00:10:02.657 slat (usec): min=8, max=33182, avg=67.38, stdev=1112.58 00:10:02.657 clat (usec): min=144, max=800, avg=505.58, stdev=95.80 00:10:02.657 lat (usec): min=153, max=33664, avg=572.96, stdev=1116.11 00:10:02.657 clat percentiles (usec): 00:10:02.657 | 1.00th=[ 245], 5.00th=[ 326], 10.00th=[ 392], 20.00th=[ 416], 00:10:02.657 | 30.00th=[ 474], 40.00th=[ 498], 50.00th=[ 506], 60.00th=[ 529], 00:10:02.657 | 70.00th=[ 562], 80.00th=[ 594], 90.00th=[ 627], 95.00th=[ 644], 00:10:02.657 | 99.00th=[ 676], 99.50th=[ 709], 99.90th=[ 799], 99.95th=[ 799], 00:10:02.657 | 99.99th=[ 799] 00:10:02.657 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:02.657 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:02.657 lat (usec) : 250=0.71%, 500=27.29%, 750=40.14%, 1000=20.64% 00:10:02.657 lat (msec) : 2=11.21% 00:10:02.657 cpu : usr=2.00%, sys=6.20%, ctx=1404, majf=0, minf=1 00:10:02.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.657 issued rwts: total=512,888,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.657 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.657 00:10:02.657 Run status group 0 (all jobs): 00:10:02.657 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:10:02.657 WRITE: bw=3548KiB/s (3634kB/s), 3548KiB/s-3548KiB/s (3634kB/s-3634kB/s), io=3552KiB (3637kB), run=1001-1001msec 00:10:02.657 00:10:02.657 Disk stats (read/write): 00:10:02.657 nvme0n1: ios=538/707, merge=0/0, ticks=1391/288, in_queue=1679, util=99.00% 00:10:02.657 16:34:53 
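The fio-wrapper flags above (-i 4096 -d 1 -t write -r 1 -v) appear to map onto bs, iodepth, rw, runtime and the verify options in the job dump that follows them. Reconstructed from that dump, an equivalent standalone job file would be:

    cat > nmic-write.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio nmic-write.fio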
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:02.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:02.657 rmmod nvme_tcp 00:10:02.657 rmmod nvme_fabrics 00:10:02.657 rmmod nvme_keyring 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 2560464 ']' 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 2560464 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2560464 ']' 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2560464 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2560464 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2560464' 00:10:02.657 killing process with pid 2560464 00:10:02.657 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2560464 00:10:02.657 16:34:54 
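Teardown above mirrors the setup: disconnect the host, unload the NVMe/TCP modules (the rmmod lines show nvme_tcp pulling nvme_fabrics and nvme_keyring out with it), kill the target, then, as the log goes on to show, strip only the comment-tagged iptables rules and flush the initiator address. Condensed, with "ip netns delete" as an assumed equivalent of the _remove_spdk_ns helper:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # removes only the SPDK_NVMF-tagged rule
    ip netns delete cvl_0_0_ns_spdk                        # assumption: what _remove_spdk_ns boils down to
    ip -4 addr flush cvl_0_1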
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2560464 00:10:02.917 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:02.917 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:02.917 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:02.917 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:02.917 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:10:02.917 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:02.917 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:10:02.917 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:02.917 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:02.917 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.917 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.917 16:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:05.463 00:10:05.463 real 0m17.472s 00:10:05.463 user 0m41.699s 00:10:05.463 sys 0m6.215s 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.463 ************************************ 00:10:05.463 END TEST nvmf_nmic 00:10:05.463 ************************************ 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.463 ************************************ 00:10:05.463 START TEST nvmf_fio_target 00:10:05.463 ************************************ 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:05.463 * Looking for test storage... 
00:10:05.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.463 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:05.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.464 --rc genhtml_branch_coverage=1 00:10:05.464 --rc genhtml_function_coverage=1 00:10:05.464 --rc genhtml_legend=1 00:10:05.464 --rc geninfo_all_blocks=1 00:10:05.464 --rc geninfo_unexecuted_blocks=1 00:10:05.464 00:10:05.464 ' 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:05.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.464 --rc genhtml_branch_coverage=1 00:10:05.464 --rc genhtml_function_coverage=1 00:10:05.464 --rc genhtml_legend=1 00:10:05.464 --rc geninfo_all_blocks=1 00:10:05.464 --rc geninfo_unexecuted_blocks=1 00:10:05.464 00:10:05.464 ' 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:05.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.464 --rc genhtml_branch_coverage=1 00:10:05.464 --rc genhtml_function_coverage=1 00:10:05.464 --rc genhtml_legend=1 00:10:05.464 --rc geninfo_all_blocks=1 00:10:05.464 --rc geninfo_unexecuted_blocks=1 00:10:05.464 00:10:05.464 ' 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:05.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.464 --rc genhtml_branch_coverage=1 00:10:05.464 --rc genhtml_function_coverage=1 00:10:05.464 --rc genhtml_legend=1 00:10:05.464 --rc geninfo_all_blocks=1 00:10:05.464 --rc geninfo_unexecuted_blocks=1 00:10:05.464 00:10:05.464 ' 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
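The scripts/common.sh chatter above is a version guard: "lt 1.15 2" splits both version strings on '.', '-' and ':' and compares them component by component, which is what selects the lcov option set that follows. A compact re-implementation of that comparison, illustrative rather than a copy of scripts/common.sh:

    ver_lt() {                             # succeeds when $1 sorts strictly before $2
        local IFS=.-: i a b
        read -ra a <<<"$1"
        read -ra b <<<"$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                           # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov predates 2.x"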
uname -s 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
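The host identity used for every "nvme connect" in this run comes from nvme-cli: gen-hostnqn (invoked above) emits a UUID-based NQN, and the UUID portion doubles as the host ID. A short sketch; the parameter expansion is an assumption about how common.sh derives NVME_HOSTID, not a copy of it:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumed: keep just the trailing UUID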
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:05.464 16:34:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:05.464 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.465 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.465 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.465 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:05.465 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:05.465 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.465 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.049 16:35:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:12.049 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:12.049 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.049 16:35:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:12.049 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:12.049 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:12.050 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:12.050 16:35:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:12.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:10:12.050 00:10:12.050 --- 10.0.0.2 ping statistics --- 00:10:12.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.050 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:12.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:10:12.050 00:10:12.050 --- 10.0.0.1 ping statistics --- 00:10:12.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.050 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=2566061 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 2566061 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2566061 ']' 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.050 16:35:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.050 [2024-10-01 16:35:03.609417] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
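The nvmf_tcp_init sequence traced above builds a back-to-back NVMe/TCP link out of the two e810 ports: the target port cvl_0_0 is moved into a private network namespace with 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1, and both directions are ping-verified before the target starts. A minimal standalone sketch of the same topology, assuming the cvl_0_0/cvl_0_1 names and 10.0.0.0/24 addressing seen in the trace:

# sketch only -- mirrors the nvmf/common.sh nvmf_tcp_init steps traced above
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1       # start from unaddressed ports
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                       # initiator -> target reachability
ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator reachability
# nvmf_tgt is then launched inside the namespace, as in the trace:
#   ip netns exec "$NS" nvmf_tgt -i 0 -e 0xFFFF -m 0xF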
00:10:12.050 [2024-10-01 16:35:03.609481] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.050 [2024-10-01 16:35:03.698890] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.310 [2024-10-01 16:35:03.793541] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.310 [2024-10-01 16:35:03.793605] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.310 [2024-10-01 16:35:03.793613] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.310 [2024-10-01 16:35:03.793620] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.310 [2024-10-01 16:35:03.793626] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.310 [2024-10-01 16:35:03.793751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.310 [2024-10-01 16:35:03.793890] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.310 [2024-10-01 16:35:03.794041] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.310 [2024-10-01 16:35:03.794044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.879 16:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:12.879 16:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:12.879 16:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:12.879 16:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:12.879 16:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.879 16:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.879 16:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:13.139 [2024-10-01 16:35:04.723541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.139 16:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.398 16:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:13.398 16:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.697 16:35:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:13.697 16:35:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.697 16:35:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:13.697 16:35:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.958 16:35:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:13.958 16:35:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:14.218 16:35:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.478 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:14.478 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.739 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:14.739 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.998 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:14.998 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:14.998 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:15.258 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:15.258 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.518 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:15.518 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:15.781 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.041 [2024-10-01 16:35:07.469133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.041 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:16.041 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:16.301 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:17.684 16:35:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:17.684 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:17.684 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:17.684 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:17.684 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:17.684 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:20.223 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:20.223 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:20.223 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:20.223 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:20.223 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:20.223 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:20.223 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:20.223 [global] 00:10:20.223 thread=1 00:10:20.223 invalidate=1 00:10:20.223 rw=write 00:10:20.223 time_based=1 00:10:20.223 runtime=1 00:10:20.223 ioengine=libaio 00:10:20.223 direct=1 00:10:20.223 bs=4096 00:10:20.223 iodepth=1 00:10:20.223 norandommap=0 00:10:20.223 numjobs=1 00:10:20.223 00:10:20.223 verify_dump=1 00:10:20.223 verify_backlog=512 00:10:20.223 verify_state_save=0 00:10:20.223 do_verify=1 00:10:20.223 verify=crc32c-intel 00:10:20.223 [job0] 00:10:20.223 filename=/dev/nvme0n1 00:10:20.223 [job1] 00:10:20.223 filename=/dev/nvme0n2 00:10:20.223 [job2] 00:10:20.223 filename=/dev/nvme0n3 00:10:20.223 [job3] 00:10:20.223 filename=/dev/nvme0n4 00:10:20.223 Could not set queue depth (nvme0n1) 00:10:20.223 Could not set queue depth (nvme0n2) 00:10:20.223 Could not set queue depth (nvme0n3) 00:10:20.223 Could not set queue depth (nvme0n4) 00:10:20.223 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.223 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.223 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.223 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.223 fio-3.35 00:10:20.223 Starting 4 threads 00:10:21.605 00:10:21.605 job0: (groupid=0, jobs=1): err= 0: pid=2567549: Tue Oct 1 16:35:12 2024 00:10:21.605 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:21.605 slat (nsec): min=2404, max=54203, avg=21459.80, stdev=8889.17 00:10:21.605 clat (usec): min=148, max=1681, avg=503.63, stdev=87.27 00:10:21.605 lat (usec): min=156, max=1693, avg=525.09, stdev=87.60 00:10:21.605 clat percentiles (usec): 00:10:21.605 | 1.00th=[ 297], 5.00th=[ 379], 10.00th=[ 412], 20.00th=[ 449], 
00:10:21.605 | 30.00th=[ 478], 40.00th=[ 494], 50.00th=[ 506], 60.00th=[ 519], 00:10:21.605 | 70.00th=[ 529], 80.00th=[ 553], 90.00th=[ 570], 95.00th=[ 603], 00:10:21.605 | 99.00th=[ 758], 99.50th=[ 824], 99.90th=[ 1156], 99.95th=[ 1680], 00:10:21.605 | 99.99th=[ 1680] 00:10:21.605 write: IOPS=1238, BW=4955KiB/s (5074kB/s)(4960KiB/1001msec); 0 zone resets 00:10:21.605 slat (nsec): min=9651, max=54853, avg=25208.84, stdev=12283.49 00:10:21.605 clat (usec): min=88, max=1895, avg=336.36, stdev=97.59 00:10:21.605 lat (usec): min=98, max=1929, avg=361.57, stdev=102.68 00:10:21.605 clat percentiles (usec): 00:10:21.605 | 1.00th=[ 99], 5.00th=[ 212], 10.00th=[ 231], 20.00th=[ 247], 00:10:21.605 | 30.00th=[ 285], 40.00th=[ 322], 50.00th=[ 338], 60.00th=[ 351], 00:10:21.605 | 70.00th=[ 383], 80.00th=[ 424], 90.00th=[ 449], 95.00th=[ 465], 00:10:21.605 | 99.00th=[ 498], 99.50th=[ 519], 99.90th=[ 586], 99.95th=[ 1893], 00:10:21.605 | 99.99th=[ 1893] 00:10:21.605 bw ( KiB/s): min= 5328, max= 5328, per=38.48%, avg=5328.00, stdev= 0.00, samples=1 00:10:21.605 iops : min= 1332, max= 1332, avg=1332.00, stdev= 0.00, samples=1 00:10:21.605 lat (usec) : 100=0.62%, 250=11.17%, 500=62.32%, 750=25.35%, 1000=0.35% 00:10:21.605 lat (msec) : 2=0.18% 00:10:21.605 cpu : usr=3.50%, sys=4.90%, ctx=2266, majf=0, minf=1 00:10:21.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.605 issued rwts: total=1024,1240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.605 job1: (groupid=0, jobs=1): err= 0: pid=2567550: Tue Oct 1 16:35:12 2024 00:10:21.605 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:21.605 slat (nsec): min=6813, max=46576, avg=26666.16, stdev=2697.85 00:10:21.605 clat (usec): min=441, max=2800, avg=940.97, stdev=145.73 00:10:21.605 lat (usec): min=468, max=2830, avg=967.64, stdev=146.05 00:10:21.605 clat percentiles (usec): 00:10:21.605 | 1.00th=[ 603], 5.00th=[ 734], 10.00th=[ 807], 20.00th=[ 898], 00:10:21.605 | 30.00th=[ 922], 40.00th=[ 938], 50.00th=[ 955], 60.00th=[ 963], 00:10:21.605 | 70.00th=[ 979], 80.00th=[ 996], 90.00th=[ 1029], 95.00th=[ 1045], 00:10:21.605 | 99.00th=[ 1156], 99.50th=[ 1893], 99.90th=[ 2802], 99.95th=[ 2802], 00:10:21.605 | 99.99th=[ 2802] 00:10:21.605 write: IOPS=869, BW=3477KiB/s (3560kB/s)(3480KiB/1001msec); 0 zone resets 00:10:21.605 slat (nsec): min=9538, max=56249, avg=31190.36, stdev=9881.43 00:10:21.605 clat (usec): min=176, max=3757, avg=537.07, stdev=157.61 00:10:21.605 lat (usec): min=187, max=3792, avg=568.26, stdev=160.46 00:10:21.605 clat percentiles (usec): 00:10:21.605 | 1.00th=[ 265], 5.00th=[ 343], 10.00th=[ 379], 20.00th=[ 433], 00:10:21.605 | 30.00th=[ 469], 40.00th=[ 510], 50.00th=[ 545], 60.00th=[ 570], 00:10:21.605 | 70.00th=[ 594], 80.00th=[ 635], 90.00th=[ 676], 95.00th=[ 717], 00:10:21.605 | 99.00th=[ 775], 99.50th=[ 807], 99.90th=[ 3752], 99.95th=[ 3752], 00:10:21.605 | 99.99th=[ 3752] 00:10:21.605 bw ( KiB/s): min= 4096, max= 4096, per=29.58%, avg=4096.00, stdev= 0.00, samples=1 00:10:21.605 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:21.605 lat (usec) : 250=0.29%, 500=23.23%, 750=40.45%, 1000=29.81% 00:10:21.605 lat (msec) : 2=6.01%, 4=0.22% 00:10:21.605 cpu : usr=2.90%, sys=5.20%, ctx=1386, majf=0, minf=1 00:10:21.605 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.605 issued rwts: total=512,870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.605 job2: (groupid=0, jobs=1): err= 0: pid=2567551: Tue Oct 1 16:35:12 2024 00:10:21.605 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:21.605 slat (nsec): min=7511, max=62202, avg=26390.22, stdev=4322.14 00:10:21.605 clat (usec): min=474, max=2064, avg=934.67, stdev=134.64 00:10:21.605 lat (usec): min=482, max=2090, avg=961.06, stdev=135.13 00:10:21.605 clat percentiles (usec): 00:10:21.605 | 1.00th=[ 619], 5.00th=[ 709], 10.00th=[ 775], 20.00th=[ 848], 00:10:21.605 | 30.00th=[ 898], 40.00th=[ 922], 50.00th=[ 947], 60.00th=[ 971], 00:10:21.605 | 70.00th=[ 988], 80.00th=[ 1012], 90.00th=[ 1045], 95.00th=[ 1090], 00:10:21.605 | 99.00th=[ 1352], 99.50th=[ 1663], 99.90th=[ 2073], 99.95th=[ 2073], 00:10:21.605 | 99.99th=[ 2073] 00:10:21.605 write: IOPS=849, BW=3397KiB/s (3478kB/s)(3400KiB/1001msec); 0 zone resets 00:10:21.605 slat (nsec): min=9287, max=65390, avg=30397.42, stdev=9322.65 00:10:21.605 clat (usec): min=244, max=893, avg=555.08, stdev=107.79 00:10:21.605 lat (usec): min=255, max=929, avg=585.48, stdev=110.92 00:10:21.605 clat percentiles (usec): 00:10:21.605 | 1.00th=[ 310], 5.00th=[ 367], 10.00th=[ 424], 20.00th=[ 461], 00:10:21.605 | 30.00th=[ 502], 40.00th=[ 529], 50.00th=[ 562], 60.00th=[ 578], 00:10:21.605 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 693], 95.00th=[ 717], 00:10:21.605 | 99.00th=[ 840], 99.50th=[ 865], 99.90th=[ 898], 99.95th=[ 898], 00:10:21.605 | 99.99th=[ 898] 00:10:21.605 bw ( KiB/s): min= 4096, max= 4096, per=29.58%, avg=4096.00, stdev= 0.00, samples=1 00:10:21.605 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:21.605 lat (usec) : 250=0.07%, 500=18.28%, 750=45.01%, 1000=27.68% 00:10:21.605 lat (msec) : 2=8.88%, 4=0.07% 00:10:21.605 cpu : usr=3.50%, sys=4.50%, ctx=1362, majf=0, minf=2 00:10:21.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.605 issued rwts: total=512,850,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.605 job3: (groupid=0, jobs=1): err= 0: pid=2567552: Tue Oct 1 16:35:12 2024 00:10:21.605 read: IOPS=16, BW=67.8KiB/s (69.4kB/s)(68.0KiB/1003msec) 00:10:21.605 slat (nsec): min=25001, max=25901, avg=25426.71, stdev=316.37 00:10:21.605 clat (usec): min=40812, max=42020, avg=41372.98, stdev=502.21 00:10:21.605 lat (usec): min=40837, max=42045, avg=41398.41, stdev=502.20 00:10:21.605 clat percentiles (usec): 00:10:21.605 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:21.605 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:10:21.605 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:21.606 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:21.606 | 99.99th=[42206] 00:10:21.606 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:10:21.606 slat (nsec): min=9826, max=64118, avg=30199.75, stdev=9075.66 00:10:21.606 clat (usec): min=219, max=888, 
avg=547.56, stdev=115.00 00:10:21.606 lat (usec): min=232, max=922, avg=577.76, stdev=117.48 00:10:21.606 clat percentiles (usec): 00:10:21.606 | 1.00th=[ 289], 5.00th=[ 347], 10.00th=[ 400], 20.00th=[ 453], 00:10:21.606 | 30.00th=[ 490], 40.00th=[ 529], 50.00th=[ 545], 60.00th=[ 578], 00:10:21.606 | 70.00th=[ 603], 80.00th=[ 644], 90.00th=[ 701], 95.00th=[ 742], 00:10:21.606 | 99.00th=[ 840], 99.50th=[ 848], 99.90th=[ 889], 99.95th=[ 889], 00:10:21.606 | 99.99th=[ 889] 00:10:21.606 bw ( KiB/s): min= 4096, max= 4096, per=29.58%, avg=4096.00, stdev= 0.00, samples=1 00:10:21.606 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:21.606 lat (usec) : 250=0.19%, 500=31.57%, 750=61.44%, 1000=3.59% 00:10:21.606 lat (msec) : 50=3.21% 00:10:21.606 cpu : usr=0.90%, sys=1.40%, ctx=529, majf=0, minf=1 00:10:21.606 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.606 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.606 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.606 00:10:21.606 Run status group 0 (all jobs): 00:10:21.606 READ: bw=8235KiB/s (8433kB/s), 67.8KiB/s-4092KiB/s (69.4kB/s-4190kB/s), io=8260KiB (8458kB), run=1001-1003msec 00:10:21.606 WRITE: bw=13.5MiB/s (14.2MB/s), 2042KiB/s-4955KiB/s (2091kB/s-5074kB/s), io=13.6MiB (14.2MB), run=1001-1003msec 00:10:21.606 00:10:21.606 Disk stats (read/write): 00:10:21.606 nvme0n1: ios=911/1024, merge=0/0, ticks=1126/338, in_queue=1464, util=97.70% 00:10:21.606 nvme0n2: ios=537/579, merge=0/0, ticks=1410/239, in_queue=1649, util=97.86% 00:10:21.606 nvme0n3: ios=512/584, merge=0/0, ticks=458/254, in_queue=712, util=88.75% 00:10:21.606 nvme0n4: ios=13/512, merge=0/0, ticks=539/264, in_queue=803, util=89.60% 00:10:21.606 16:35:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:21.606 [global] 00:10:21.606 thread=1 00:10:21.606 invalidate=1 00:10:21.606 rw=randwrite 00:10:21.606 time_based=1 00:10:21.606 runtime=1 00:10:21.606 ioengine=libaio 00:10:21.606 direct=1 00:10:21.606 bs=4096 00:10:21.606 iodepth=1 00:10:21.606 norandommap=0 00:10:21.606 numjobs=1 00:10:21.606 00:10:21.606 verify_dump=1 00:10:21.606 verify_backlog=512 00:10:21.606 verify_state_save=0 00:10:21.606 do_verify=1 00:10:21.606 verify=crc32c-intel 00:10:21.606 [job0] 00:10:21.606 filename=/dev/nvme0n1 00:10:21.606 [job1] 00:10:21.606 filename=/dev/nvme0n2 00:10:21.606 [job2] 00:10:21.606 filename=/dev/nvme0n3 00:10:21.606 [job3] 00:10:21.606 filename=/dev/nvme0n4 00:10:21.606 Could not set queue depth (nvme0n1) 00:10:21.606 Could not set queue depth (nvme0n2) 00:10:21.606 Could not set queue depth (nvme0n3) 00:10:21.606 Could not set queue depth (nvme0n4) 00:10:21.873 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.873 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.873 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.873 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.873 fio-3.35 00:10:21.873 Starting 4 threads 00:10:23.265 
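Comparing the fio-wrapper invocations in this test against the job options echoed in each listing, the flags appear to map one-to-one: -i sets bs, -d sets iodepth, -t sets rw, -r sets runtime, and -v enables the crc32c-intel verify block. A hand-rolled equivalent of the randwrite pass starting here (filenames as enumerated above; sketch only, the wrapper's real template may carry additional options):

# one job per namespace of cnode1, matching the [job0]..[job3] listing above
fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=randwrite --bs=4096 --iodepth=1 --ioengine=libaio --direct=1 \
    --runtime=1 --time_based --thread --invalidate=1 \
    --verify=crc32c-intel --do_verify=1 --verify_backlog=512 --verify_dump=1
# repeat for /dev/nvme0n2../dev/nvme0n4 (job1..job3) to drive all four namespaces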
00:10:23.265 job0: (groupid=0, jobs=1): err= 0: pid=2568028: Tue Oct 1 16:35:14 2024 00:10:23.265 read: IOPS=497, BW=1990KiB/s (2038kB/s)(2052KiB/1031msec) 00:10:23.265 slat (nsec): min=7589, max=49516, avg=26918.14, stdev=6804.82 00:10:23.265 clat (usec): min=540, max=41360, avg=972.81, stdev=1789.06 00:10:23.265 lat (usec): min=572, max=41385, avg=999.73, stdev=1789.08 00:10:23.265 clat percentiles (usec): 00:10:23.265 | 1.00th=[ 594], 5.00th=[ 725], 10.00th=[ 775], 20.00th=[ 832], 00:10:23.265 | 30.00th=[ 857], 40.00th=[ 881], 50.00th=[ 898], 60.00th=[ 922], 00:10:23.265 | 70.00th=[ 947], 80.00th=[ 971], 90.00th=[ 1004], 95.00th=[ 1037], 00:10:23.265 | 99.00th=[ 1090], 99.50th=[ 1156], 99.90th=[41157], 99.95th=[41157], 00:10:23.265 | 99.99th=[41157] 00:10:23.265 write: IOPS=993, BW=3973KiB/s (4068kB/s)(4096KiB/1031msec); 0 zone resets 00:10:23.265 slat (nsec): min=9385, max=94953, avg=25758.92, stdev=11039.04 00:10:23.265 clat (usec): min=160, max=841, avg=468.52, stdev=159.01 00:10:23.265 lat (usec): min=174, max=873, avg=494.28, stdev=160.77 00:10:23.265 clat percentiles (usec): 00:10:23.265 | 1.00th=[ 188], 5.00th=[ 215], 10.00th=[ 241], 20.00th=[ 322], 00:10:23.265 | 30.00th=[ 371], 40.00th=[ 412], 50.00th=[ 469], 60.00th=[ 519], 00:10:23.265 | 70.00th=[ 562], 80.00th=[ 619], 90.00th=[ 693], 95.00th=[ 725], 00:10:23.265 | 99.00th=[ 799], 99.50th=[ 816], 99.90th=[ 840], 99.95th=[ 840], 00:10:23.265 | 99.99th=[ 840] 00:10:23.265 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=2 00:10:23.265 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:23.265 lat (usec) : 250=7.29%, 500=30.19%, 750=28.89%, 1000=30.25% 00:10:23.265 lat (msec) : 2=3.32%, 50=0.07% 00:10:23.265 cpu : usr=3.50%, sys=4.08%, ctx=1539, majf=0, minf=2 00:10:23.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.265 issued rwts: total=513,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.265 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.265 job1: (groupid=0, jobs=1): err= 0: pid=2568029: Tue Oct 1 16:35:14 2024 00:10:23.265 read: IOPS=18, BW=75.4KiB/s (77.2kB/s)(76.0KiB/1008msec) 00:10:23.265 slat (nsec): min=9950, max=32131, avg=26767.05, stdev=4575.22 00:10:23.265 clat (usec): min=869, max=42013, avg=37181.15, stdev=12798.72 00:10:23.265 lat (usec): min=898, max=42039, avg=37207.91, stdev=12797.41 00:10:23.265 clat percentiles (usec): 00:10:23.265 | 1.00th=[ 873], 5.00th=[ 873], 10.00th=[ 906], 20.00th=[41157], 00:10:23.265 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:10:23.265 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:23.265 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:23.265 | 99.99th=[42206] 00:10:23.265 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:10:23.265 slat (nsec): min=8749, max=56127, avg=27550.95, stdev=10188.83 00:10:23.265 clat (usec): min=291, max=775, avg=551.51, stdev=100.53 00:10:23.265 lat (usec): min=303, max=807, avg=579.06, stdev=105.65 00:10:23.265 clat percentiles (usec): 00:10:23.265 | 1.00th=[ 314], 5.00th=[ 351], 10.00th=[ 416], 20.00th=[ 469], 00:10:23.265 | 30.00th=[ 510], 40.00th=[ 537], 50.00th=[ 562], 60.00th=[ 586], 00:10:23.265 | 70.00th=[ 611], 80.00th=[ 644], 90.00th=[ 676], 95.00th=[ 701], 
00:10:23.265 | 99.00th=[ 750], 99.50th=[ 766], 99.90th=[ 775], 99.95th=[ 775], 00:10:23.265 | 99.99th=[ 775] 00:10:23.265 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=1 00:10:23.265 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:23.265 lat (usec) : 500=27.12%, 750=68.55%, 1000=1.13% 00:10:23.265 lat (msec) : 50=3.20% 00:10:23.265 cpu : usr=0.99%, sys=1.89%, ctx=531, majf=0, minf=1 00:10:23.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.265 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.265 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.265 job2: (groupid=0, jobs=1): err= 0: pid=2568030: Tue Oct 1 16:35:14 2024 00:10:23.265 read: IOPS=576, BW=2306KiB/s (2361kB/s)(2308KiB/1001msec) 00:10:23.265 slat (nsec): min=6583, max=56741, avg=26273.34, stdev=7979.23 00:10:23.265 clat (usec): min=417, max=42017, avg=1001.53, stdev=3370.49 00:10:23.265 lat (usec): min=445, max=42036, avg=1027.80, stdev=3370.30 00:10:23.265 clat percentiles (usec): 00:10:23.265 | 1.00th=[ 490], 5.00th=[ 562], 10.00th=[ 586], 20.00th=[ 635], 00:10:23.265 | 30.00th=[ 668], 40.00th=[ 693], 50.00th=[ 717], 60.00th=[ 750], 00:10:23.265 | 70.00th=[ 775], 80.00th=[ 807], 90.00th=[ 848], 95.00th=[ 873], 00:10:23.265 | 99.00th=[ 1565], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:23.265 | 99.99th=[42206] 00:10:23.265 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:23.265 slat (nsec): min=9011, max=71798, avg=30753.52, stdev=10426.44 00:10:23.265 clat (usec): min=163, max=1239, avg=354.59, stdev=99.01 00:10:23.265 lat (usec): min=173, max=1250, avg=385.34, stdev=101.37 00:10:23.265 clat percentiles (usec): 00:10:23.265 | 1.00th=[ 188], 5.00th=[ 215], 10.00th=[ 247], 20.00th=[ 285], 00:10:23.265 | 30.00th=[ 297], 40.00th=[ 314], 50.00th=[ 330], 60.00th=[ 359], 00:10:23.265 | 70.00th=[ 400], 80.00th=[ 433], 90.00th=[ 486], 95.00th=[ 529], 00:10:23.265 | 99.00th=[ 586], 99.50th=[ 619], 99.90th=[ 1156], 99.95th=[ 1237], 00:10:23.265 | 99.99th=[ 1237] 00:10:23.265 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=1 00:10:23.265 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:23.265 lat (usec) : 250=6.68%, 500=53.03%, 750=26.30%, 1000=13.37% 00:10:23.265 lat (msec) : 2=0.31%, 4=0.06%, 50=0.25% 00:10:23.265 cpu : usr=3.60%, sys=5.80%, ctx=1604, majf=0, minf=1 00:10:23.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.265 issued rwts: total=577,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.265 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.265 job3: (groupid=0, jobs=1): err= 0: pid=2568031: Tue Oct 1 16:35:14 2024 00:10:23.265 read: IOPS=232, BW=931KiB/s (953kB/s)(932KiB/1001msec) 00:10:23.265 slat (nsec): min=6643, max=49008, avg=24359.47, stdev=7207.60 00:10:23.265 clat (usec): min=326, max=42126, avg=3012.58, stdev=9401.53 00:10:23.265 lat (usec): min=353, max=42152, avg=3036.94, stdev=9401.83 00:10:23.265 clat percentiles (usec): 00:10:23.265 | 1.00th=[ 502], 5.00th=[ 562], 10.00th=[ 586], 20.00th=[ 
644], 00:10:23.265 | 30.00th=[ 685], 40.00th=[ 717], 50.00th=[ 750], 60.00th=[ 775], 00:10:23.265 | 70.00th=[ 807], 80.00th=[ 848], 90.00th=[ 889], 95.00th=[41157], 00:10:23.265 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:23.265 | 99.99th=[42206] 00:10:23.265 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:23.265 slat (usec): min=9, max=106, avg=25.64, stdev=11.74 00:10:23.265 clat (usec): min=215, max=973, avg=536.68, stdev=127.69 00:10:23.265 lat (usec): min=235, max=1005, avg=562.32, stdev=133.06 00:10:23.265 clat percentiles (usec): 00:10:23.265 | 1.00th=[ 269], 5.00th=[ 326], 10.00th=[ 363], 20.00th=[ 424], 00:10:23.265 | 30.00th=[ 469], 40.00th=[ 510], 50.00th=[ 545], 60.00th=[ 570], 00:10:23.265 | 70.00th=[ 603], 80.00th=[ 652], 90.00th=[ 701], 95.00th=[ 742], 00:10:23.265 | 99.00th=[ 824], 99.50th=[ 848], 99.90th=[ 971], 99.95th=[ 971], 00:10:23.265 | 99.99th=[ 971] 00:10:23.265 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=1 00:10:23.265 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:23.265 lat (usec) : 250=0.27%, 500=25.64%, 750=55.57%, 1000=16.51% 00:10:23.265 lat (msec) : 2=0.27%, 50=1.74% 00:10:23.265 cpu : usr=0.80%, sys=2.40%, ctx=747, majf=0, minf=1 00:10:23.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.265 issued rwts: total=233,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.265 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.265 00:10:23.265 Run status group 0 (all jobs): 00:10:23.265 READ: bw=5207KiB/s (5332kB/s), 75.4KiB/s-2306KiB/s (77.2kB/s-2361kB/s), io=5368KiB (5497kB), run=1001-1031msec 00:10:23.265 WRITE: bw=11.6MiB/s (12.2MB/s), 2032KiB/s-4092KiB/s (2081kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1031msec 00:10:23.265 00:10:23.265 Disk stats (read/write): 00:10:23.265 nvme0n1: ios=562/731, merge=0/0, ticks=529/307, in_queue=836, util=92.69% 00:10:23.265 nvme0n2: ios=52/512, merge=0/0, ticks=594/222, in_queue=816, util=88.52% 00:10:23.265 nvme0n3: ios=611/1024, merge=0/0, ticks=574/248, in_queue=822, util=100.00% 00:10:23.265 nvme0n4: ios=33/512, merge=0/0, ticks=553/259, in_queue=812, util=89.65% 00:10:23.265 16:35:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:23.265 [global] 00:10:23.265 thread=1 00:10:23.265 invalidate=1 00:10:23.265 rw=write 00:10:23.265 time_based=1 00:10:23.265 runtime=1 00:10:23.265 ioengine=libaio 00:10:23.265 direct=1 00:10:23.265 bs=4096 00:10:23.265 iodepth=128 00:10:23.265 norandommap=0 00:10:23.265 numjobs=1 00:10:23.265 00:10:23.265 verify_dump=1 00:10:23.265 verify_backlog=512 00:10:23.265 verify_state_save=0 00:10:23.265 do_verify=1 00:10:23.265 verify=crc32c-intel 00:10:23.265 [job0] 00:10:23.265 filename=/dev/nvme0n1 00:10:23.265 [job1] 00:10:23.265 filename=/dev/nvme0n2 00:10:23.265 [job2] 00:10:23.265 filename=/dev/nvme0n3 00:10:23.265 [job3] 00:10:23.265 filename=/dev/nvme0n4 00:10:23.265 Could not set queue depth (nvme0n1) 00:10:23.266 Could not set queue depth (nvme0n2) 00:10:23.266 Could not set queue depth (nvme0n3) 00:10:23.266 Could not set queue depth (nvme0n4) 00:10:23.528 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.528 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.528 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.528 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.528 fio-3.35 00:10:23.528 Starting 4 threads 00:10:24.906 00:10:24.906 job0: (groupid=0, jobs=1): err= 0: pid=2568502: Tue Oct 1 16:35:16 2024 00:10:24.906 read: IOPS=6496, BW=25.4MiB/s (26.6MB/s)(25.5MiB/1004msec) 00:10:24.906 slat (nsec): min=1218, max=9935.5k, avg=86100.64, stdev=637868.46 00:10:24.906 clat (usec): min=1529, max=20151, avg=10598.91, stdev=2511.02 00:10:24.906 lat (usec): min=3300, max=20165, avg=10685.01, stdev=2551.48 00:10:24.906 clat percentiles (usec): 00:10:24.906 | 1.00th=[ 4293], 5.00th=[ 6783], 10.00th=[ 8586], 20.00th=[ 9241], 00:10:24.906 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[10421], 00:10:24.906 | 70.00th=[10945], 80.00th=[11994], 90.00th=[14353], 95.00th=[16057], 00:10:24.906 | 99.00th=[17957], 99.50th=[18744], 99.90th=[19792], 99.95th=[19792], 00:10:24.906 | 99.99th=[20055] 00:10:24.906 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:10:24.906 slat (usec): min=2, max=7083, avg=60.97, stdev=238.12 00:10:24.906 clat (usec): min=1592, max=19886, avg=8744.24, stdev=1844.39 00:10:24.906 lat (usec): min=1607, max=19889, avg=8805.21, stdev=1863.98 00:10:24.906 clat percentiles (usec): 00:10:24.906 | 1.00th=[ 3032], 5.00th=[ 4424], 10.00th=[ 5932], 20.00th=[ 7898], 00:10:24.906 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:10:24.906 | 70.00th=[ 9503], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[10552], 00:10:24.906 | 99.00th=[10683], 99.50th=[14746], 99.90th=[17171], 99.95th=[19268], 00:10:24.906 | 99.99th=[19792] 00:10:24.906 bw ( KiB/s): min=24592, max=28656, per=26.86%, avg=26624.00, stdev=2873.68, samples=2 00:10:24.906 iops : min= 6148, max= 7164, avg=6656.00, stdev=718.42, samples=2 00:10:24.906 lat (msec) : 2=0.08%, 4=1.90%, 10=63.43%, 20=34.57%, 50=0.02% 00:10:24.906 cpu : usr=4.39%, sys=6.48%, ctx=858, majf=0, minf=1 00:10:24.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:24.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.906 issued rwts: total=6522,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.906 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.906 job1: (groupid=0, jobs=1): err= 0: pid=2568503: Tue Oct 1 16:35:16 2024 00:10:24.906 read: IOPS=8230, BW=32.1MiB/s (33.7MB/s)(32.2MiB/1003msec) 00:10:24.906 slat (nsec): min=1174, max=11533k, avg=62992.34, stdev=428306.22 00:10:24.906 clat (usec): min=1931, max=21727, avg=7989.76, stdev=2035.34 00:10:24.906 lat (usec): min=2788, max=21730, avg=8052.75, stdev=2058.34 00:10:24.906 clat percentiles (usec): 00:10:24.906 | 1.00th=[ 4228], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 6652], 00:10:24.906 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7504], 60.00th=[ 7898], 00:10:24.906 | 70.00th=[ 8356], 80.00th=[ 9110], 90.00th=[10814], 95.00th=[11731], 00:10:24.906 | 99.00th=[15795], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:10:24.906 | 99.99th=[21627] 00:10:24.906 write: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec); 0 zone resets 
00:10:24.906 slat (usec): min=2, max=5741, avg=50.63, stdev=243.26 00:10:24.906 clat (usec): min=1395, max=15321, avg=7038.00, stdev=1330.17 00:10:24.906 lat (usec): min=1403, max=15324, avg=7088.63, stdev=1349.49 00:10:24.906 clat percentiles (usec): 00:10:24.906 | 1.00th=[ 2769], 5.00th=[ 4146], 10.00th=[ 5211], 20.00th=[ 6587], 00:10:24.906 | 30.00th=[ 6849], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7242], 00:10:24.906 | 70.00th=[ 7373], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:10:24.906 | 99.00th=[10028], 99.50th=[10945], 99.90th=[13042], 99.95th=[13435], 00:10:24.906 | 99.99th=[15270] 00:10:24.906 bw ( KiB/s): min=33232, max=35888, per=34.87%, avg=34560.00, stdev=1878.08, samples=2 00:10:24.906 iops : min= 8308, max= 8972, avg=8640.00, stdev=469.52, samples=2 00:10:24.906 lat (msec) : 2=0.09%, 4=2.45%, 10=89.96%, 20=7.48%, 50=0.01% 00:10:24.906 cpu : usr=6.09%, sys=6.59%, ctx=1049, majf=0, minf=1 00:10:24.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:24.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.906 issued rwts: total=8255,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.906 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.906 job2: (groupid=0, jobs=1): err= 0: pid=2568504: Tue Oct 1 16:35:16 2024 00:10:24.906 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:10:24.906 slat (nsec): min=1293, max=15788k, avg=103431.81, stdev=807904.92 00:10:24.906 clat (usec): min=3753, max=41379, avg=12905.49, stdev=5035.42 00:10:24.906 lat (usec): min=3759, max=41382, avg=13008.93, stdev=5106.12 00:10:24.906 clat percentiles (usec): 00:10:24.906 | 1.00th=[ 6259], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9503], 00:10:24.906 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10683], 60.00th=[11994], 00:10:24.906 | 70.00th=[14091], 80.00th=[16712], 90.00th=[19530], 95.00th=[22152], 00:10:24.906 | 99.00th=[30540], 99.50th=[38011], 99.90th=[41157], 99.95th=[41157], 00:10:24.906 | 99.99th=[41157] 00:10:24.906 write: IOPS=4458, BW=17.4MiB/s (18.3MB/s)(17.6MiB/1008msec); 0 zone resets 00:10:24.906 slat (usec): min=2, max=12244, avg=122.84, stdev=732.04 00:10:24.906 clat (usec): min=1711, max=79972, avg=16676.45, stdev=15142.74 00:10:24.906 lat (usec): min=1724, max=79982, avg=16799.30, stdev=15238.27 00:10:24.906 clat percentiles (usec): 00:10:24.906 | 1.00th=[ 3720], 5.00th=[ 5538], 10.00th=[ 6718], 20.00th=[ 8717], 00:10:24.906 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10945], 60.00th=[14746], 00:10:24.906 | 70.00th=[16319], 80.00th=[18220], 90.00th=[30278], 95.00th=[61604], 00:10:24.906 | 99.00th=[76022], 99.50th=[78119], 99.90th=[80217], 99.95th=[80217], 00:10:24.906 | 99.99th=[80217] 00:10:24.906 bw ( KiB/s): min=14552, max=20384, per=17.63%, avg=17468.00, stdev=4123.85, samples=2 00:10:24.906 iops : min= 3638, max= 5096, avg=4367.00, stdev=1030.96, samples=2 00:10:24.906 lat (msec) : 2=0.02%, 4=0.76%, 10=37.64%, 20=49.29%, 50=9.32% 00:10:24.906 lat (msec) : 100=2.97% 00:10:24.906 cpu : usr=3.48%, sys=4.77%, ctx=440, majf=0, minf=1 00:10:24.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:24.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.906 issued rwts: total=4096,4494,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.906 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:10:24.906 job3: (groupid=0, jobs=1): err= 0: pid=2568506: Tue Oct 1 16:35:16 2024 00:10:24.906 read: IOPS=4967, BW=19.4MiB/s (20.3MB/s)(19.5MiB/1004msec) 00:10:24.906 slat (nsec): min=1247, max=16244k, avg=84473.46, stdev=748182.00 00:10:24.906 clat (usec): min=3160, max=32401, avg=11678.31, stdev=3920.84 00:10:24.906 lat (usec): min=3725, max=39687, avg=11762.78, stdev=3991.57 00:10:24.906 clat percentiles (usec): 00:10:24.906 | 1.00th=[ 5145], 5.00th=[ 7373], 10.00th=[ 8455], 20.00th=[ 9241], 00:10:24.906 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10552], 00:10:24.906 | 70.00th=[12125], 80.00th=[13960], 90.00th=[18482], 95.00th=[20317], 00:10:24.906 | 99.00th=[21890], 99.50th=[23462], 99.90th=[25035], 99.95th=[28443], 00:10:24.906 | 99.99th=[32375] 00:10:24.906 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:10:24.906 slat (usec): min=2, max=14814, avg=94.02, stdev=743.86 00:10:24.906 clat (usec): min=406, max=82140, avg=13516.78, stdev=13620.78 00:10:24.906 lat (usec): min=438, max=82151, avg=13610.80, stdev=13713.73 00:10:24.906 clat percentiles (usec): 00:10:24.906 | 1.00th=[ 1385], 5.00th=[ 3425], 10.00th=[ 4948], 20.00th=[ 6456], 00:10:24.906 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9634], 00:10:24.906 | 70.00th=[14222], 80.00th=[16319], 90.00th=[19006], 95.00th=[46924], 00:10:24.906 | 99.00th=[74974], 99.50th=[76022], 99.90th=[82314], 99.95th=[82314], 00:10:24.906 | 99.99th=[82314] 00:10:24.906 bw ( KiB/s): min=17648, max=23312, per=20.67%, avg=20480.00, stdev=4005.05, samples=2 00:10:24.906 iops : min= 4412, max= 5828, avg=5120.00, stdev=1001.26, samples=2 00:10:24.906 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.06% 00:10:24.906 lat (msec) : 2=1.26%, 4=2.37%, 10=50.36%, 20=38.57%, 50=4.99% 00:10:24.906 lat (msec) : 100=2.34% 00:10:24.906 cpu : usr=4.39%, sys=5.08%, ctx=349, majf=0, minf=1 00:10:24.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:24.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.906 issued rwts: total=4987,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.906 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.906 00:10:24.906 Run status group 0 (all jobs): 00:10:24.906 READ: bw=92.5MiB/s (97.0MB/s), 15.9MiB/s-32.1MiB/s (16.6MB/s-33.7MB/s), io=93.2MiB (97.7MB), run=1003-1008msec 00:10:24.906 WRITE: bw=96.8MiB/s (101MB/s), 17.4MiB/s-33.9MiB/s (18.3MB/s-35.5MB/s), io=97.6MiB (102MB), run=1003-1008msec 00:10:24.906 00:10:24.906 Disk stats (read/write): 00:10:24.906 nvme0n1: ios=5381/5632, merge=0/0, ticks=55570/48359, in_queue=103929, util=90.68% 00:10:24.906 nvme0n2: ios=7105/7168, merge=0/0, ticks=47000/41258, in_queue=88258, util=93.19% 00:10:24.906 nvme0n3: ios=4096/4119, merge=0/0, ticks=50792/52575, in_queue=103367, util=88.71% 00:10:24.906 nvme0n4: ios=3894/4096, merge=0/0, ticks=42515/58520, in_queue=101035, util=92.72% 00:10:24.906 16:35:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:24.906 [global] 00:10:24.906 thread=1 00:10:24.906 invalidate=1 00:10:24.906 rw=randwrite 00:10:24.906 time_based=1 00:10:24.906 runtime=1 00:10:24.906 ioengine=libaio 00:10:24.906 direct=1 00:10:24.906 bs=4096 00:10:24.906 iodepth=128 00:10:24.906 
norandommap=0 00:10:24.906 numjobs=1 00:10:24.906 00:10:24.906 verify_dump=1 00:10:24.906 verify_backlog=512 00:10:24.906 verify_state_save=0 00:10:24.906 do_verify=1 00:10:24.906 verify=crc32c-intel 00:10:24.906 [job0] 00:10:24.906 filename=/dev/nvme0n1 00:10:24.906 [job1] 00:10:24.906 filename=/dev/nvme0n2 00:10:24.906 [job2] 00:10:24.906 filename=/dev/nvme0n3 00:10:24.906 [job3] 00:10:24.906 filename=/dev/nvme0n4 00:10:24.906 Could not set queue depth (nvme0n1) 00:10:24.906 Could not set queue depth (nvme0n2) 00:10:24.906 Could not set queue depth (nvme0n3) 00:10:24.906 Could not set queue depth (nvme0n4) 00:10:25.166 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.166 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.166 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.166 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.166 fio-3.35 00:10:25.166 Starting 4 threads 00:10:26.609 00:10:26.609 job0: (groupid=0, jobs=1): err= 0: pid=2568979: Tue Oct 1 16:35:17 2024 00:10:26.609 read: IOPS=5992, BW=23.4MiB/s (24.5MB/s)(23.5MiB/1003msec) 00:10:26.609 slat (nsec): min=1254, max=9778.1k, avg=91631.86, stdev=683653.41 00:10:26.609 clat (usec): min=1637, max=20267, avg=11347.30, stdev=2603.24 00:10:26.609 lat (usec): min=3827, max=20283, avg=11438.93, stdev=2643.61 00:10:26.609 clat percentiles (usec): 00:10:26.609 | 1.00th=[ 4686], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[10159], 00:10:26.609 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:10:26.609 | 70.00th=[11469], 80.00th=[12911], 90.00th=[15533], 95.00th=[16909], 00:10:26.609 | 99.00th=[19006], 99.50th=[19530], 99.90th=[20055], 99.95th=[20055], 00:10:26.609 | 99.99th=[20317] 00:10:26.609 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:10:26.609 slat (usec): min=2, max=11167, avg=67.22, stdev=361.04 00:10:26.609 clat (usec): min=729, max=20146, avg=9617.47, stdev=2213.81 00:10:26.609 lat (usec): min=737, max=25080, avg=9684.69, stdev=2243.52 00:10:26.609 clat percentiles (usec): 00:10:26.609 | 1.00th=[ 3359], 5.00th=[ 4948], 10.00th=[ 6063], 20.00th=[ 7963], 00:10:26.609 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:10:26.609 | 70.00th=[10683], 80.00th=[10683], 90.00th=[10814], 95.00th=[11207], 00:10:26.609 | 99.00th=[14877], 99.50th=[14877], 99.90th=[19530], 99.95th=[19792], 00:10:26.609 | 99.99th=[20055] 00:10:26.609 bw ( KiB/s): min=24576, max=24576, per=22.82%, avg=24576.00, stdev= 0.00, samples=2 00:10:26.609 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:10:26.609 lat (usec) : 750=0.02%, 1000=0.01% 00:10:26.609 lat (msec) : 2=0.06%, 4=1.13%, 10=25.63%, 20=73.10%, 50=0.07% 00:10:26.609 cpu : usr=4.69%, sys=5.69%, ctx=684, majf=0, minf=1 00:10:26.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:26.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.609 issued rwts: total=6010,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.609 job1: (groupid=0, jobs=1): err= 0: pid=2568980: Tue Oct 1 16:35:17 2024 00:10:26.609 read: IOPS=7626, 
BW=29.8MiB/s (31.2MB/s)(30.0MiB/1007msec) 00:10:26.609 slat (nsec): min=1225, max=7705.4k, avg=70961.18, stdev=510291.76 00:10:26.609 clat (usec): min=2090, max=17334, avg=8766.81, stdev=2144.19 00:10:26.609 lat (usec): min=2095, max=17337, avg=8837.78, stdev=2173.75 00:10:26.609 clat percentiles (usec): 00:10:26.609 | 1.00th=[ 3556], 5.00th=[ 5932], 10.00th=[ 6718], 20.00th=[ 7373], 00:10:26.609 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8225], 60.00th=[ 8455], 00:10:26.609 | 70.00th=[ 8979], 80.00th=[10552], 90.00th=[11994], 95.00th=[13304], 00:10:26.609 | 99.00th=[14615], 99.50th=[14877], 99.90th=[17433], 99.95th=[17433], 00:10:26.609 | 99.99th=[17433] 00:10:26.609 write: IOPS=8133, BW=31.8MiB/s (33.3MB/s)(32.0MiB/1007msec); 0 zone resets 00:10:26.609 slat (usec): min=2, max=6481, avg=50.64, stdev=197.28 00:10:26.609 clat (usec): min=1152, max=17335, avg=7373.40, stdev=1633.79 00:10:26.609 lat (usec): min=1163, max=17339, avg=7424.04, stdev=1647.79 00:10:26.609 clat percentiles (usec): 00:10:26.609 | 1.00th=[ 2606], 5.00th=[ 3785], 10.00th=[ 4948], 20.00th=[ 6456], 00:10:26.609 | 30.00th=[ 7504], 40.00th=[ 7832], 50.00th=[ 7898], 60.00th=[ 8029], 00:10:26.609 | 70.00th=[ 8094], 80.00th=[ 8160], 90.00th=[ 8291], 95.00th=[ 8455], 00:10:26.609 | 99.00th=[11207], 99.50th=[13435], 99.90th=[15008], 99.95th=[17433], 00:10:26.609 | 99.99th=[17433] 00:10:26.609 bw ( KiB/s): min=31736, max=32768, per=29.95%, avg=32252.00, stdev=729.73, samples=2 00:10:26.609 iops : min= 7934, max= 8192, avg=8063.00, stdev=182.43, samples=2 00:10:26.609 lat (msec) : 2=0.22%, 4=3.69%, 10=83.41%, 20=12.68% 00:10:26.609 cpu : usr=4.57%, sys=8.35%, ctx=1049, majf=0, minf=2 00:10:26.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:26.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.609 issued rwts: total=7680,8190,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.609 job2: (groupid=0, jobs=1): err= 0: pid=2568984: Tue Oct 1 16:35:17 2024 00:10:26.609 read: IOPS=6034, BW=23.6MiB/s (24.7MB/s)(23.6MiB/1003msec) 00:10:26.609 slat (nsec): min=1278, max=3857.8k, avg=84501.09, stdev=454513.75 00:10:26.609 clat (usec): min=1123, max=15222, avg=10613.75, stdev=999.33 00:10:26.609 lat (usec): min=3791, max=15249, avg=10698.25, stdev=1068.96 00:10:26.609 clat percentiles (usec): 00:10:26.609 | 1.00th=[ 7308], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10290], 00:10:26.609 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10683], 00:10:26.609 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11731], 00:10:26.609 | 99.00th=[13566], 99.50th=[13960], 99.90th=[14353], 99.95th=[14484], 00:10:26.609 | 99.99th=[15270] 00:10:26.609 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:10:26.609 slat (usec): min=2, max=3630, avg=74.81, stdev=352.56 00:10:26.609 clat (usec): min=6875, max=15034, avg=10156.53, stdev=898.63 00:10:26.609 lat (usec): min=6883, max=15059, avg=10231.34, stdev=940.75 00:10:26.609 clat percentiles (usec): 00:10:26.609 | 1.00th=[ 7767], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9503], 00:10:26.609 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10421], 00:10:26.609 | 70.00th=[10552], 80.00th=[10683], 90.00th=[11076], 95.00th=[11731], 00:10:26.609 | 99.00th=[13173], 99.50th=[13566], 99.90th=[13960], 99.95th=[14353], 00:10:26.609 | 
99.99th=[15008] 00:10:26.609 bw ( KiB/s): min=24576, max=24576, per=22.82%, avg=24576.00, stdev= 0.00, samples=2 00:10:26.609 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:10:26.609 lat (msec) : 2=0.01%, 4=0.20%, 10=27.46%, 20=72.33% 00:10:26.609 cpu : usr=3.79%, sys=6.09%, ctx=702, majf=0, minf=1 00:10:26.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:26.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.609 issued rwts: total=6053,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.609 job3: (groupid=0, jobs=1): err= 0: pid=2568985: Tue Oct 1 16:35:17 2024 00:10:26.609 read: IOPS=6140, BW=24.0MiB/s (25.2MB/s)(24.2MiB/1008msec) 00:10:26.609 slat (nsec): min=1255, max=9055.6k, avg=86186.41, stdev=655090.62 00:10:26.609 clat (usec): min=3991, max=18905, avg=10648.14, stdev=2526.36 00:10:26.609 lat (usec): min=3996, max=18930, avg=10734.33, stdev=2568.59 00:10:26.609 clat percentiles (usec): 00:10:26.609 | 1.00th=[ 4817], 5.00th=[ 7832], 10.00th=[ 8356], 20.00th=[ 9110], 00:10:26.609 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:10:26.609 | 70.00th=[11076], 80.00th=[12649], 90.00th=[14615], 95.00th=[16057], 00:10:26.609 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18482], 99.95th=[18482], 00:10:26.609 | 99.99th=[19006] 00:10:26.609 write: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec); 0 zone resets 00:10:26.609 slat (usec): min=2, max=19237, avg=65.32, stdev=354.01 00:10:26.609 clat (usec): min=2930, max=32619, avg=9312.23, stdev=2541.21 00:10:26.609 lat (usec): min=2938, max=32635, avg=9377.55, stdev=2564.21 00:10:26.609 clat percentiles (usec): 00:10:26.609 | 1.00th=[ 3195], 5.00th=[ 4817], 10.00th=[ 5997], 20.00th=[ 8455], 00:10:26.609 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9765], 00:10:26.609 | 70.00th=[ 9896], 80.00th=[10028], 90.00th=[10290], 95.00th=[13042], 00:10:26.609 | 99.00th=[20579], 99.50th=[21103], 99.90th=[21103], 99.95th=[21103], 00:10:26.609 | 99.99th=[32637] 00:10:26.609 bw ( KiB/s): min=25264, max=27336, per=24.43%, avg=26300.00, stdev=1465.13, samples=2 00:10:26.609 iops : min= 6316, max= 6834, avg=6575.00, stdev=366.28, samples=2 00:10:26.609 lat (msec) : 4=1.22%, 10=64.68%, 20=33.11%, 50=0.99% 00:10:26.609 cpu : usr=3.28%, sys=7.25%, ctx=847, majf=0, minf=1 00:10:26.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:26.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.609 issued rwts: total=6190,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.609 00:10:26.609 Run status group 0 (all jobs): 00:10:26.609 READ: bw=100MiB/s (105MB/s), 23.4MiB/s-29.8MiB/s (24.5MB/s-31.2MB/s), io=101MiB (106MB), run=1003-1008msec 00:10:26.609 WRITE: bw=105MiB/s (110MB/s), 23.9MiB/s-31.8MiB/s (25.1MB/s-33.3MB/s), io=106MiB (111MB), run=1003-1008msec 00:10:26.609 00:10:26.609 Disk stats (read/write): 00:10:26.609 nvme0n1: ios=5003/5120, merge=0/0, ticks=53951/47722, in_queue=101673, util=87.37% 00:10:26.609 nvme0n2: ios=6498/6656, merge=0/0, ticks=54341/47633, in_queue=101974, util=87.36% 00:10:26.609 nvme0n3: ios=5027/5120, merge=0/0, ticks=18219/15746, in_queue=33965, 
util=97.78% 00:10:26.609 nvme0n4: ios=5120/5439, merge=0/0, ticks=52465/49420, in_queue=101885, util=89.41% 00:10:26.609 16:35:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:26.609 16:35:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2569067 00:10:26.609 16:35:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:26.609 16:35:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:26.609 [global] 00:10:26.609 thread=1 00:10:26.609 invalidate=1 00:10:26.609 rw=read 00:10:26.609 time_based=1 00:10:26.609 runtime=10 00:10:26.609 ioengine=libaio 00:10:26.609 direct=1 00:10:26.609 bs=4096 00:10:26.609 iodepth=1 00:10:26.609 norandommap=1 00:10:26.609 numjobs=1 00:10:26.609 00:10:26.609 [job0] 00:10:26.609 filename=/dev/nvme0n1 00:10:26.609 [job1] 00:10:26.609 filename=/dev/nvme0n2 00:10:26.609 [job2] 00:10:26.609 filename=/dev/nvme0n3 00:10:26.609 [job3] 00:10:26.609 filename=/dev/nvme0n4 00:10:26.609 Could not set queue depth (nvme0n1) 00:10:26.609 Could not set queue depth (nvme0n2) 00:10:26.609 Could not set queue depth (nvme0n3) 00:10:26.609 Could not set queue depth (nvme0n4) 00:10:26.609 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.609 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.609 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.609 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.609 fio-3.35 00:10:26.609 Starting 4 threads 00:10:29.917 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:29.917 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:29.917 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=7548928, buflen=4096 00:10:29.917 fio: pid=2569459, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:29.917 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:29.917 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:29.917 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9179136, buflen=4096 00:10:29.917 fio: pid=2569458, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:29.917 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:29.917 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:29.917 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=14688256, buflen=4096 00:10:29.917 fio: pid=2569455, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:30.180 16:35:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.180 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:30.180 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=13606912, buflen=4096 00:10:30.180 fio: pid=2569456, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:30.180 00:10:30.180 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2569455: Tue Oct 1 16:35:21 2024 00:10:30.180 read: IOPS=1190, BW=4761KiB/s (4875kB/s)(14.0MiB/3013msec) 00:10:30.180 slat (usec): min=3, max=21690, avg=31.31, stdev=380.01 00:10:30.180 clat (usec): min=156, max=3961, avg=797.09, stdev=290.92 00:10:30.180 lat (usec): min=177, max=22692, avg=828.40, stdev=481.88 00:10:30.180 clat percentiles (usec): 00:10:30.180 | 1.00th=[ 235], 5.00th=[ 338], 10.00th=[ 388], 20.00th=[ 490], 00:10:30.180 | 30.00th=[ 594], 40.00th=[ 725], 50.00th=[ 889], 60.00th=[ 955], 00:10:30.180 | 70.00th=[ 1004], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1139], 00:10:30.180 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 3195], 99.95th=[ 3425], 00:10:30.180 | 99.99th=[ 3949] 00:10:30.180 bw ( KiB/s): min= 3768, max= 7376, per=36.24%, avg=4934.40, stdev=1639.57, samples=5 00:10:30.180 iops : min= 942, max= 1844, avg=1233.60, stdev=409.89, samples=5 00:10:30.180 lat (usec) : 250=1.39%, 500=19.77%, 750=19.91%, 1000=27.57% 00:10:30.180 lat (msec) : 2=31.22%, 4=0.11% 00:10:30.180 cpu : usr=2.26%, sys=3.85%, ctx=3590, majf=0, minf=1 00:10:30.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.180 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.180 issued rwts: total=3587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.180 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2569456: Tue Oct 1 16:35:21 2024 00:10:30.180 read: IOPS=1029, BW=4115KiB/s (4214kB/s)(13.0MiB/3229msec) 00:10:30.180 slat (usec): min=6, max=25062, avg=40.76, stdev=498.73 00:10:30.180 clat (usec): min=472, max=1833, avg=916.64, stdev=85.63 00:10:30.180 lat (usec): min=496, max=26017, avg=957.40, stdev=505.89 00:10:30.180 clat percentiles (usec): 00:10:30.180 | 1.00th=[ 619], 5.00th=[ 750], 10.00th=[ 807], 20.00th=[ 865], 00:10:30.180 | 30.00th=[ 898], 40.00th=[ 922], 50.00th=[ 930], 60.00th=[ 947], 00:10:30.180 | 70.00th=[ 963], 80.00th=[ 979], 90.00th=[ 996], 95.00th=[ 1020], 00:10:30.180 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[ 1188], 99.95th=[ 1369], 00:10:30.180 | 99.99th=[ 1827] 00:10:30.180 bw ( KiB/s): min= 4000, max= 4264, per=30.46%, avg=4148.00, stdev=100.27, samples=6 00:10:30.180 iops : min= 1000, max= 1066, avg=1037.00, stdev=25.07, samples=6 00:10:30.180 lat (usec) : 500=0.09%, 750=4.94%, 1000=85.43% 00:10:30.180 lat (msec) : 2=9.51% 00:10:30.180 cpu : usr=2.23%, sys=3.47%, ctx=3328, majf=0, minf=2 00:10:30.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.180 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:10:30.180 issued rwts: total=3323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.180 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2569458: Tue Oct 1 16:35:21 2024 00:10:30.180 read: IOPS=785, BW=3140KiB/s (3215kB/s)(8964KiB/2855msec) 00:10:30.180 slat (nsec): min=5588, max=74617, avg=22785.17, stdev=10560.40 00:10:30.180 clat (usec): min=148, max=41763, avg=1236.41, stdev=5504.90 00:10:30.180 lat (usec): min=159, max=41790, avg=1259.19, stdev=5505.04 00:10:30.180 clat percentiles (usec): 00:10:30.180 | 1.00th=[ 167], 5.00th=[ 186], 10.00th=[ 202], 20.00th=[ 237], 00:10:30.180 | 30.00th=[ 265], 40.00th=[ 297], 50.00th=[ 351], 60.00th=[ 553], 00:10:30.180 | 70.00th=[ 701], 80.00th=[ 783], 90.00th=[ 889], 95.00th=[ 955], 00:10:30.180 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:10:30.180 | 99.99th=[41681] 00:10:30.180 bw ( KiB/s): min= 96, max= 5800, per=13.12%, avg=1787.20, stdev=2537.93, samples=5 00:10:30.180 iops : min= 24, max= 1450, avg=446.80, stdev=634.48, samples=5 00:10:30.180 lat (usec) : 250=24.31%, 500=34.79%, 750=17.13%, 1000=20.38% 00:10:30.180 lat (msec) : 2=1.43%, 10=0.04%, 50=1.87% 00:10:30.180 cpu : usr=1.09%, sys=2.66%, ctx=2243, majf=0, minf=2 00:10:30.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.180 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.180 issued rwts: total=2242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.180 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2569459: Tue Oct 1 16:35:21 2024 00:10:30.180 read: IOPS=695, BW=2782KiB/s (2849kB/s)(7372KiB/2650msec) 00:10:30.180 slat (nsec): min=6070, max=48397, avg=26279.71, stdev=2602.69 00:10:30.180 clat (usec): min=295, max=42735, avg=1393.10, stdev=4226.16 00:10:30.180 lat (usec): min=321, max=42763, avg=1419.38, stdev=4226.33 00:10:30.180 clat percentiles (usec): 00:10:30.180 | 1.00th=[ 619], 5.00th=[ 783], 10.00th=[ 840], 20.00th=[ 906], 00:10:30.180 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 979], 00:10:30.180 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1074], 00:10:30.180 | 99.00th=[40633], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:10:30.180 | 99.99th=[42730] 00:10:30.180 bw ( KiB/s): min= 440, max= 4056, per=21.61%, avg=2942.40, stdev=1615.68, samples=5 00:10:30.180 iops : min= 110, max= 1014, avg=735.60, stdev=403.92, samples=5 00:10:30.180 lat (usec) : 500=0.27%, 750=3.42%, 1000=69.41% 00:10:30.180 lat (msec) : 2=25.76%, 50=1.08% 00:10:30.180 cpu : usr=0.64%, sys=2.30%, ctx=1846, majf=0, minf=2 00:10:30.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.180 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.180 issued rwts: total=1844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.180 00:10:30.180 Run status group 0 (all jobs): 00:10:30.180 READ: bw=13.3MiB/s (13.9MB/s), 2782KiB/s-4761KiB/s (2849kB/s-4875kB/s), io=42.9MiB (45.0MB), run=2650-3229msec 00:10:30.180 00:10:30.180 Disk stats 
(read/write): 00:10:30.180 nvme0n1: ios=3450/0, merge=0/0, ticks=2490/0, in_queue=2490, util=94.66% 00:10:30.180 nvme0n2: ios=3204/0, merge=0/0, ticks=2610/0, in_queue=2610, util=94.74% 00:10:30.180 nvme0n3: ios=2242/0, merge=0/0, ticks=2692/0, in_queue=2692, util=96.34% 00:10:30.180 nvme0n4: ios=1882/0, merge=0/0, ticks=3415/0, in_queue=3415, util=99.78% 00:10:30.180 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.180 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:30.440 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.440 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:30.699 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.699 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:30.958 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.958 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:31.218 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:31.218 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2569067 00:10:31.218 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:31.218 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:31.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.218 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:31.218 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:31.218 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:31.218 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.218 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.218 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:31.218 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:31.218 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:31.218 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:31.218 nvmf hotplug test: fio failed as expected 00:10:31.218 16:35:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:31.478 rmmod nvme_tcp 00:10:31.478 rmmod nvme_fabrics 00:10:31.478 rmmod nvme_keyring 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 2566061 ']' 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 2566061 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2566061 ']' 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2566061 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2566061 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2566061' 00:10:31.478 killing process with pid 2566061 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2566061 00:10:31.478 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2566061 00:10:31.738 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:31.738 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:31.738 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # 
nvmf_tcp_fini 00:10:31.738 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:31.738 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:10:31.738 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:31.738 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:10:31.738 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:31.738 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:31.738 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.738 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.738 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:34.277 00:10:34.277 real 0m28.725s 00:10:34.277 user 2m10.470s 00:10:34.277 sys 0m9.005s 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.277 ************************************ 00:10:34.277 END TEST nvmf_fio_target 00:10:34.277 ************************************ 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:34.277 ************************************ 00:10:34.277 START TEST nvmf_bdevio 00:10:34.277 ************************************ 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:34.277 * Looking for test storage... 
00:10:34.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.277 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:34.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.278 --rc genhtml_branch_coverage=1 00:10:34.278 --rc genhtml_function_coverage=1 00:10:34.278 --rc genhtml_legend=1 00:10:34.278 --rc geninfo_all_blocks=1 00:10:34.278 --rc geninfo_unexecuted_blocks=1 00:10:34.278 00:10:34.278 ' 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:34.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.278 --rc genhtml_branch_coverage=1 00:10:34.278 --rc genhtml_function_coverage=1 00:10:34.278 --rc genhtml_legend=1 00:10:34.278 --rc geninfo_all_blocks=1 00:10:34.278 --rc geninfo_unexecuted_blocks=1 00:10:34.278 00:10:34.278 ' 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:34.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.278 --rc genhtml_branch_coverage=1 00:10:34.278 --rc genhtml_function_coverage=1 00:10:34.278 --rc genhtml_legend=1 00:10:34.278 --rc geninfo_all_blocks=1 00:10:34.278 --rc geninfo_unexecuted_blocks=1 00:10:34.278 00:10:34.278 ' 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:34.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.278 --rc genhtml_branch_coverage=1 00:10:34.278 --rc genhtml_function_coverage=1 00:10:34.278 --rc genhtml_legend=1 00:10:34.278 --rc geninfo_all_blocks=1 00:10:34.278 --rc geninfo_unexecuted_blocks=1 00:10:34.278 00:10:34.278 ' 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:34.278 16:35:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.852 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.852 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:40.852 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:40.852 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:40.852 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:40.852 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:40.852 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:40.852 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:40.852 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:40.852 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:40.852 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:40.852 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:40.852 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:40.852 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:40.852 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:40.852 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:40.853 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:40.853 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:40.853 16:35:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:40.853 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:40.853 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.853 
16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:40.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:10:40.853 00:10:40.853 --- 10.0.0.2 ping statistics --- 00:10:40.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.853 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:40.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:10:40.853 00:10:40.853 --- 10.0.0.1 ping statistics --- 00:10:40.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.853 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=2574099 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 2574099 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2574099 ']' 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.853 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.854 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.854 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.854 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.854 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:40.854 [2024-10-01 16:35:32.520168] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:10:40.854 [2024-10-01 16:35:32.520224] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.113 [2024-10-01 16:35:32.581260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.114 [2024-10-01 16:35:32.646476] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.114 [2024-10-01 16:35:32.646511] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.114 [2024-10-01 16:35:32.646521] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.114 [2024-10-01 16:35:32.646526] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.114 [2024-10-01 16:35:32.646531] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.114 [2024-10-01 16:35:32.646635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:41.114 [2024-10-01 16:35:32.646789] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:41.114 [2024-10-01 16:35:32.646941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.114 [2024-10-01 16:35:32.646943] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:41.114 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.114 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:41.114 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:41.114 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:41.114 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.114 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.114 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:41.114 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.114 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.114 [2024-10-01 16:35:32.787507] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.114 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.114 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:41.114 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.114 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.374 Malloc0 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.374 16:35:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.374 [2024-10-01 16:35:32.826353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:41.374 { 00:10:41.374 "params": { 00:10:41.374 "name": "Nvme$subsystem", 00:10:41.374 "trtype": "$TEST_TRANSPORT", 00:10:41.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:41.374 "adrfam": "ipv4", 00:10:41.374 "trsvcid": "$NVMF_PORT", 00:10:41.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:41.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:41.374 "hdgst": ${hdgst:-false}, 00:10:41.374 "ddgst": ${ddgst:-false} 00:10:41.374 }, 00:10:41.374 "method": "bdev_nvme_attach_controller" 00:10:41.374 } 00:10:41.374 EOF 00:10:41.374 )") 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:10:41.374 16:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:41.374 "params": { 00:10:41.374 "name": "Nvme1", 00:10:41.374 "trtype": "tcp", 00:10:41.374 "traddr": "10.0.0.2", 00:10:41.374 "adrfam": "ipv4", 00:10:41.374 "trsvcid": "4420", 00:10:41.374 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:41.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:41.374 "hdgst": false, 00:10:41.374 "ddgst": false 00:10:41.374 }, 00:10:41.374 "method": "bdev_nvme_attach_controller" 00:10:41.374 }' 00:10:41.374 [2024-10-01 16:35:32.877128] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:10:41.374 [2024-10-01 16:35:32.877175] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2574300 ]
00:10:41.374 [2024-10-01 16:35:32.953622] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:41.374 [2024-10-01 16:35:33.017989] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:10:41.374 [2024-10-01 16:35:33.018127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:10:41.374 [2024-10-01 16:35:33.018130] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:41.988 I/O targets:
00:10:41.988 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:10:41.988
00:10:41.988
00:10:41.988 CUnit - A unit testing framework for C - Version 2.1-3
00:10:41.988 http://cunit.sourceforge.net/
00:10:41.988
00:10:41.988
00:10:41.988 Suite: bdevio tests on: Nvme1n1
00:10:41.988 Test: blockdev write read block ...passed
00:10:41.988 Test: blockdev write zeroes read block ...passed
00:10:41.988 Test: blockdev write zeroes read no split ...passed
00:10:41.988 Test: blockdev write zeroes read split ...passed
00:10:41.988 Test: blockdev write zeroes read split partial ...passed
00:10:41.988 Test: blockdev reset ...[2024-10-01 16:35:33.428469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:10:41.988 [2024-10-01 16:35:33.428526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf17090 (9): Bad file descriptor
00:10:41.988 [2024-10-01 16:35:33.448178] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:10:41.988 passed
00:10:41.988 Test: blockdev write read 8 blocks ...passed
00:10:41.988 Test: blockdev write read size > 128k ...passed
00:10:41.988 Test: blockdev write read invalid size ...passed
00:10:41.988 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:41.988 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:41.988 Test: blockdev write read max offset ...passed
00:10:42.282 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:42.282 Test: blockdev writev readv 8 blocks ...passed
00:10:42.282 Test: blockdev writev readv 30 x 1block ...passed
00:10:42.282 Test: blockdev writev readv block ...passed
00:10:42.282 Test: blockdev writev readv size > 128k ...passed
00:10:42.282 Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:42.282 Test: blockdev comparev and writev ...[2024-10-01 16:35:33.826162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:42.282 [2024-10-01 16:35:33.826188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:10:42.282 [2024-10-01 16:35:33.826204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:42.282 [2024-10-01 16:35:33.826210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:10:42.282 [2024-10-01 16:35:33.826483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:42.282 [2024-10-01 16:35:33.826491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:10:42.282 [2024-10-01 16:35:33.826501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:42.282 [2024-10-01 16:35:33.826507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:10:42.282 [2024-10-01 16:35:33.826810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:42.282 [2024-10-01 16:35:33.826818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:10:42.282 [2024-10-01 16:35:33.826828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:42.282 [2024-10-01 16:35:33.826834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:10:42.282 [2024-10-01 16:35:33.827100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:42.282 [2024-10-01 16:35:33.827109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:10:42.282 [2024-10-01 16:35:33.827118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:42.282 [2024-10-01 16:35:33.827124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:10:42.282 passed
00:10:42.282 Test: blockdev nvme passthru rw ...passed
00:10:42.282 Test: blockdev nvme passthru vendor specific ...[2024-10-01 16:35:33.911344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:10:42.282 [2024-10-01 16:35:33.911355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:10:42.282 [2024-10-01 16:35:33.911560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:10:42.282 [2024-10-01 16:35:33.911568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:10:42.282 [2024-10-01 16:35:33.911755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:10:42.282 [2024-10-01 16:35:33.911762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:10:42.282 [2024-10-01 16:35:33.911982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:10:42.282 [2024-10-01 16:35:33.911989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:10:42.282 passed
00:10:42.282 Test: blockdev nvme admin passthru ...passed
00:10:42.562 Test: blockdev copy ...passed
00:10:42.562
00:10:42.562 Run Summary: Type Total Ran Passed Failed Inactive
00:10:42.562 suites 1 1 n/a 0 0
00:10:42.562 tests 23 23 23 0 0
00:10:42.562 asserts 152 152 152 0 n/a
00:10:42.562
00:10:42.562 Elapsed time = 1.301 seconds
00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup
00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:42.562 rmmod nvme_tcp
00:10:42.562 rmmod nvme_fabrics
00:10:42.562 rmmod nvme_keyring
00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
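The nvmfcleanup trace above wraps modprobe -v -r nvme-tcp in set +e and a for i in {1..20} loop so a still-referenced kernel module cannot abort teardown; the bare rmmod nvme_tcp / rmmod nvme_fabrics / rmmod nvme_keyring lines are modprobe's verbose output as it unloads the dependency chain. A minimal sketch of that pattern, assuming a retry delay the trace does not actually show:

  set +e                               # tolerate transient "module in use" failures
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break # -v echoes each rmmod (nvme_tcp, nvme_fabrics, nvme_keyring)
      sleep 1                          # assumed back-off; the trace only shows the loop itself
  done
  modprobe -v -r nvme-fabrics
  set -e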
00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 2574099 ']' 00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 2574099 00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2574099 ']' 00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2574099 00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2574099 00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2574099' 00:10:42.562 killing process with pid 2574099 00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2574099 00:10:42.562 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2574099 00:10:42.821 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:42.821 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:42.821 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:42.821 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:42.821 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:10:42.821 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:42.821 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:10:42.821 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:42.821 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:42.821 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.821 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.821 16:35:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.360 16:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:45.360 00:10:45.360 real 0m10.984s 00:10:45.360 user 0m11.177s 00:10:45.360 sys 0m5.616s 00:10:45.360 16:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.360 16:35:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.360 ************************************ 00:10:45.360 END TEST nvmf_bdevio 00:10:45.360 ************************************ 00:10:45.360 16:35:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:45.360 00:10:45.360 real 4m58.165s 00:10:45.360 user 11m12.462s 00:10:45.360 sys 1m43.306s 
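The killprocess/iptr sequence traced above is the standard autotest teardown: only signal the target after confirming the pid is alive, the host is Linux, and the resolved command name (reactor_3 in this run) is not sudo; then strip every SPDK-tagged firewall rule in one save/filter/restore pass. A hedged reconstruction of that flow inferred from the trace, not the literal autotest_common.sh source:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                     # '[' -z 2574099 ']' in the trace
      kill -0 "$pid" || return 1                    # process must still exist
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")  # reactor_3 here
          [ "$process_name" = sudo ] && return 0           # never kill the sudo wrapper itself
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2> /dev/null
  }

  iptr() {
      # drop only the rules whose comments carry the SPDK_NVMF tag added at setup
      iptables-save | grep -v SPDK_NVMF | iptables-restore
  }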
00:10:45.360 16:35:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.360 16:35:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:45.360 ************************************ 00:10:45.361 END TEST nvmf_target_core 00:10:45.361 ************************************ 00:10:45.361 16:35:36 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:45.361 16:35:36 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:45.361 16:35:36 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:45.361 16:35:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:45.361 ************************************ 00:10:45.361 START TEST nvmf_target_extra 00:10:45.361 ************************************ 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:45.361 * Looking for test storage... 00:10:45.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:45.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.361 --rc genhtml_branch_coverage=1 00:10:45.361 --rc genhtml_function_coverage=1 00:10:45.361 --rc genhtml_legend=1 00:10:45.361 --rc geninfo_all_blocks=1 00:10:45.361 --rc geninfo_unexecuted_blocks=1 00:10:45.361 00:10:45.361 ' 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:45.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.361 --rc genhtml_branch_coverage=1 00:10:45.361 --rc genhtml_function_coverage=1 00:10:45.361 --rc genhtml_legend=1 00:10:45.361 --rc geninfo_all_blocks=1 00:10:45.361 --rc geninfo_unexecuted_blocks=1 00:10:45.361 00:10:45.361 ' 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:45.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.361 --rc genhtml_branch_coverage=1 00:10:45.361 --rc genhtml_function_coverage=1 00:10:45.361 --rc genhtml_legend=1 00:10:45.361 --rc geninfo_all_blocks=1 00:10:45.361 --rc geninfo_unexecuted_blocks=1 00:10:45.361 00:10:45.361 ' 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:45.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.361 --rc genhtml_branch_coverage=1 00:10:45.361 --rc genhtml_function_coverage=1 00:10:45.361 --rc genhtml_legend=1 00:10:45.361 --rc geninfo_all_blocks=1 00:10:45.361 --rc geninfo_unexecuted_blocks=1 00:10:45.361 00:10:45.361 ' 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
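The lt/cmp_versions trace spanning the lines above is scripts/common.sh deciding that lcov 1.15 predates 2 before exporting the --rc coverage options. A condensed sketch of the mechanics the trace shows (split the versions on IFS=.-:, then compare field by field; handling of operators other than '<' is elided):

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local IFS='.-:' v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # strictly less at this field
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
      done
      return 1                                            # equal, so '<' is false
  }
  # lt 1.15 2 returns 0 (1 < 2 at the first field), which is why LCOV_OPTS gains
  # --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 in the trace above.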
00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:45.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:45.361 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:45.362 ************************************ 00:10:45.362 START TEST nvmf_example 00:10:45.362 ************************************ 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:45.362 * Looking for test storage... 
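The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33 running a numeric test against a variable that expands empty, visible in the trace as '[' '' -eq 1 ']'. The log does not reveal which variable it is, so the sketch below uses a hypothetical name purely to illustrate the failure mode and the usual guard:

  # Hypothetical variable name; only the empty expansion is visible in the log.
  if [ "$SPDK_TEST_NVMF_SOMETHING" -eq 1 ]; then   # '' -eq 1 -> "integer expression expected"
      NVMF_APP+=(--some-flag)                      # hypothetical consequence
  fi
  # Defaulting the expansion avoids the warning without changing behaviour:
  if [ "${SPDK_TEST_NVMF_SOMETHING:-0}" -eq 1 ]; then
      NVMF_APP+=(--some-flag)
  fi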
00:10:45.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:45.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.362 --rc genhtml_branch_coverage=1 00:10:45.362 --rc genhtml_function_coverage=1 00:10:45.362 --rc genhtml_legend=1 00:10:45.362 --rc geninfo_all_blocks=1 00:10:45.362 --rc geninfo_unexecuted_blocks=1 00:10:45.362 00:10:45.362 ' 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:45.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.362 --rc genhtml_branch_coverage=1 00:10:45.362 --rc genhtml_function_coverage=1 00:10:45.362 --rc genhtml_legend=1 00:10:45.362 --rc geninfo_all_blocks=1 00:10:45.362 --rc geninfo_unexecuted_blocks=1 00:10:45.362 00:10:45.362 ' 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:45.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.362 --rc genhtml_branch_coverage=1 00:10:45.362 --rc genhtml_function_coverage=1 00:10:45.362 --rc genhtml_legend=1 00:10:45.362 --rc geninfo_all_blocks=1 00:10:45.362 --rc geninfo_unexecuted_blocks=1 00:10:45.362 00:10:45.362 ' 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:45.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.362 --rc genhtml_branch_coverage=1 00:10:45.362 --rc genhtml_function_coverage=1 00:10:45.362 --rc genhtml_legend=1 00:10:45.362 --rc geninfo_all_blocks=1 00:10:45.362 --rc geninfo_unexecuted_blocks=1 00:10:45.362 00:10:45.362 ' 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:45.362 16:35:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.362 16:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.362 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:45.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:45.363 16:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:45.363 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:53.493 16:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:53.493 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:53.493 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:53.493 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:53.493 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.493 16:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:53.493 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:53.493 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:53.493 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:53.493 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:53.493 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.493 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.493 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.493 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:53.493 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:53.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:53.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:10:53.493 00:10:53.493 --- 10.0.0.2 ping statistics --- 00:10:53.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.493 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:10:53.493 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:53.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:10:53.493 00:10:53.493 --- 10.0.0.1 ping statistics --- 00:10:53.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.493 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:10:53.493 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.493 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:10:53.493 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:53.493 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2578623 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2578623 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2578623 ']' 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.494 16:35:44 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.494 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable
00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:10:53.753 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:05.963 Initializing NVMe Controllers
00:11:05.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:05.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:05.963 Initialization complete. Launching workers.
00:11:05.963 ========================================================
00:11:05.963 Latency(us)
00:11:05.963 Device Information : IOPS MiB/s Average min max
00:11:05.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19372.57 75.67 3303.49 580.80 41195.53
00:11:05.963 ========================================================
00:11:05.963 Total : 19372.57 75.67 3303.49 580.80 41195.53
00:11:05.963
00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup
00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:05.963 rmmod nvme_tcp
00:11:05.963 rmmod nvme_fabrics
00:11:05.963 rmmod nvme_keyring
00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 2578623 ']'
00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 2578623
00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2578623 ']'
00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2578623
00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname
00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2578623
00:11:05.963 16:35:55
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2578623' 00:11:05.963 killing process with pid 2578623 00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2578623 00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2578623 00:11:05.963 nvmf threads initialize successfully 00:11:05.963 bdev subsystem init successfully 00:11:05.963 created a nvmf target service 00:11:05.963 create targets's poll groups done 00:11:05.963 all subsystems of target started 00:11:05.963 nvmf target is running 00:11:05.963 all subsystems of target stopped 00:11:05.963 destroy targets's poll groups done 00:11:05.963 destroyed the nvmf target service 00:11:05.963 bdev subsystem finish successfully 00:11:05.963 nvmf threads destroy successfully 00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:05.963 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:05.964 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:05.964 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:05.964 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:11:05.964 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:11:05.964 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:05.964 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:05.964 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.964 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.964 16:35:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.532 16:35:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:06.532 16:35:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:06.532 16:35:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:06.532 16:35:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.532 00:11:06.532 real 0m21.206s 00:11:06.532 user 0m47.194s 00:11:06.532 sys 0m6.616s 00:11:06.532 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:06.532 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.532 ************************************ 00:11:06.532 END TEST nvmf_example 00:11:06.532 ************************************ 00:11:06.532 16:35:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:06.532 16:35:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:06.532 16:35:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:06.532 16:35:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:06.532 ************************************ 00:11:06.532 START TEST nvmf_filesystem 00:11:06.532 ************************************ 00:11:06.532 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:06.532 * Looking for test storage... 00:11:06.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.532 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:06.532 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:06.532 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:06.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.796 --rc genhtml_branch_coverage=1 00:11:06.796 --rc genhtml_function_coverage=1 00:11:06.796 --rc genhtml_legend=1 00:11:06.796 --rc geninfo_all_blocks=1 00:11:06.796 --rc geninfo_unexecuted_blocks=1 00:11:06.796 00:11:06.796 ' 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:06.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.796 --rc genhtml_branch_coverage=1 00:11:06.796 --rc genhtml_function_coverage=1 00:11:06.796 --rc genhtml_legend=1 00:11:06.796 --rc geninfo_all_blocks=1 00:11:06.796 --rc geninfo_unexecuted_blocks=1 00:11:06.796 00:11:06.796 ' 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:06.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.796 --rc genhtml_branch_coverage=1 00:11:06.796 --rc genhtml_function_coverage=1 00:11:06.796 --rc genhtml_legend=1 00:11:06.796 --rc geninfo_all_blocks=1 00:11:06.796 --rc geninfo_unexecuted_blocks=1 00:11:06.796 00:11:06.796 ' 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:06.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.796 --rc genhtml_branch_coverage=1 00:11:06.796 --rc genhtml_function_coverage=1 00:11:06.796 --rc genhtml_legend=1 00:11:06.796 --rc geninfo_all_blocks=1 00:11:06.796 --rc geninfo_unexecuted_blocks=1 00:11:06.796 00:11:06.796 ' 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:06.796 16:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:06.796 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:06.797 16:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:06.797 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:06.798 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:06.798 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:06.798 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:06.798 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:06.798 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:06.798 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:06.798 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:06.798 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:06.798 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:06.798 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:06.798 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:06.798 #define SPDK_CONFIG_H 00:11:06.798 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:06.798 #define SPDK_CONFIG_APPS 1 00:11:06.798 #define SPDK_CONFIG_ARCH native 00:11:06.798 #undef SPDK_CONFIG_ASAN 00:11:06.798 #undef SPDK_CONFIG_AVAHI 00:11:06.798 #undef SPDK_CONFIG_CET 00:11:06.798 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:06.798 #define SPDK_CONFIG_COVERAGE 1 00:11:06.798 #define SPDK_CONFIG_CROSS_PREFIX 00:11:06.798 #undef SPDK_CONFIG_CRYPTO 00:11:06.798 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:06.798 #undef SPDK_CONFIG_CUSTOMOCF 00:11:06.798 #undef SPDK_CONFIG_DAOS 00:11:06.798 #define SPDK_CONFIG_DAOS_DIR 00:11:06.798 #define SPDK_CONFIG_DEBUG 1 00:11:06.798 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:06.798 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:06.798 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:06.798 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:06.798 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:06.798 #undef SPDK_CONFIG_DPDK_UADK 00:11:06.798 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:06.798 #define SPDK_CONFIG_EXAMPLES 1 00:11:06.798 #undef SPDK_CONFIG_FC 00:11:06.798 #define SPDK_CONFIG_FC_PATH 00:11:06.798 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:06.798 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:06.798 #define SPDK_CONFIG_FSDEV 1 00:11:06.798 #undef SPDK_CONFIG_FUSE 00:11:06.798 #undef SPDK_CONFIG_FUZZER 00:11:06.798 #define SPDK_CONFIG_FUZZER_LIB 00:11:06.798 #undef SPDK_CONFIG_GOLANG 00:11:06.798 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:06.798 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:06.798 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:06.798 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:06.798 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:06.798 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:06.798 #undef SPDK_CONFIG_HAVE_LZ4 00:11:06.798 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:06.798 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:06.798 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:06.798 #define SPDK_CONFIG_IDXD 1 00:11:06.798 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:06.798 #undef SPDK_CONFIG_IPSEC_MB 00:11:06.798 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:06.798 #define SPDK_CONFIG_ISAL 1 00:11:06.798 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:06.798 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:06.798 #define SPDK_CONFIG_LIBDIR 00:11:06.798 #undef SPDK_CONFIG_LTO 00:11:06.798 #define SPDK_CONFIG_MAX_LCORES 128 00:11:06.798 #define SPDK_CONFIG_NVME_CUSE 1 00:11:06.798 #undef SPDK_CONFIG_OCF 00:11:06.798 #define SPDK_CONFIG_OCF_PATH 00:11:06.798 #define SPDK_CONFIG_OPENSSL_PATH 00:11:06.798 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:06.798 #define SPDK_CONFIG_PGO_DIR 00:11:06.798 #undef SPDK_CONFIG_PGO_USE 00:11:06.798 #define SPDK_CONFIG_PREFIX /usr/local 00:11:06.798 #undef SPDK_CONFIG_RAID5F 00:11:06.798 #undef SPDK_CONFIG_RBD 00:11:06.798 #define SPDK_CONFIG_RDMA 1 00:11:06.798 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:06.798 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:06.798 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:06.798 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:06.798 #define SPDK_CONFIG_SHARED 1 00:11:06.798 #undef SPDK_CONFIG_SMA 00:11:06.798 #define SPDK_CONFIG_TESTS 1 00:11:06.798 #undef SPDK_CONFIG_TSAN 00:11:06.798 #define SPDK_CONFIG_UBLK 1 00:11:06.798 #define SPDK_CONFIG_UBSAN 1 00:11:06.798 #undef SPDK_CONFIG_UNIT_TESTS 00:11:06.798 #undef SPDK_CONFIG_URING 00:11:06.798 #define 
SPDK_CONFIG_URING_PATH 00:11:06.798 #undef SPDK_CONFIG_URING_ZNS 00:11:06.798 #undef SPDK_CONFIG_USDT 00:11:06.798 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:06.798 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:06.798 #define SPDK_CONFIG_VFIO_USER 1 00:11:06.798 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:06.798 #define SPDK_CONFIG_VHOST 1 00:11:06.798 #define SPDK_CONFIG_VIRTIO 1 00:11:06.798 #undef SPDK_CONFIG_VTUNE 00:11:06.798 #define SPDK_CONFIG_VTUNE_DIR 00:11:06.798 #define SPDK_CONFIG_WERROR 1 00:11:06.798 #define SPDK_CONFIG_WPDK_DIR 00:11:06.798 #undef SPDK_CONFIG_XNVME 00:11:06.798 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:06.798 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:06.798 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.798 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:06.798 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.798 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.798 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.798 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.799 16:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:06.799 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:06.800 
16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:06.800 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:06.801 16:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
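A side note on the two sanitizer exports traced just above: ASAN_OPTIONS and UBSAN_OPTIONS are plain colon-separated key=value strings that the sanitizer runtime reads at process start. A minimal stand-alone sketch of the same pattern, with the option values copied verbatim from the trace; the nvmf_tgt invocation at the end is illustrative only (any ASAN/UBSAN-instrumented binary behaves the same way):

    # Sanitizer runtime options: colon-separated key=value pairs
    # (values copied verbatim from the traced exports above).
    export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
    export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'
    # Every instrumented child launched from this shell inherits them, e.g.
    # (illustrative path, taken from the SPDK_BIN_DIR seen later in the trace):
    ./build/bin/nvmf_tgt
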
00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
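The leak-suppression setup traced above is worth pulling out: the harness recreates /var/tmp/asan_suppression_file, writes a single rule for known libfuse3 leaks, and registers the file with LeakSanitizer through LSAN_OPTIONS. A condensed stand-alone sketch of that sequence, using the same path and rule shown in the trace (the output redirection is implied, since bash xtrace does not print redirections):

    # Recreate the suppression file and register it with LeakSanitizer.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo 'leak:libfuse3.so' >> "$asan_suppression_file"   # rule verbatim from the trace
    export LSAN_OPTIONS=suppressions=$asan_suppression_file
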
00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:06.801 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j128 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2581049 ]] 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2581049 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 
00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.fbJI1B 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.fbJI1B/tests/target /tmp/spdk.fbJI1B 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=678510592 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:06.802 16:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4605919232 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=123616837632 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129363156992 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5746319360 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64671547392 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64681578496 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25849290752 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25872633856 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23343104 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=335872 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=167936 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:06.802 16:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64681115648 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64681578496 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=462848 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:06.802 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12936302592 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12936314880 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:06.803 * Looking for test storage... 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=123616837632 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=7960911872 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:06.803 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:07.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.064 --rc genhtml_branch_coverage=1 00:11:07.064 --rc genhtml_function_coverage=1 00:11:07.064 --rc genhtml_legend=1 00:11:07.064 --rc geninfo_all_blocks=1 00:11:07.064 --rc geninfo_unexecuted_blocks=1 00:11:07.064 00:11:07.064 ' 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:07.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.064 --rc genhtml_branch_coverage=1 00:11:07.064 --rc genhtml_function_coverage=1 00:11:07.064 --rc genhtml_legend=1 00:11:07.064 --rc geninfo_all_blocks=1 00:11:07.064 --rc geninfo_unexecuted_blocks=1 00:11:07.064 00:11:07.064 ' 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:07.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.064 --rc genhtml_branch_coverage=1 00:11:07.064 --rc genhtml_function_coverage=1 00:11:07.064 --rc genhtml_legend=1 00:11:07.064 --rc geninfo_all_blocks=1 00:11:07.064 --rc geninfo_unexecuted_blocks=1 00:11:07.064 00:11:07.064 ' 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:07.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.064 --rc genhtml_branch_coverage=1 00:11:07.064 --rc genhtml_function_coverage=1 00:11:07.064 --rc genhtml_legend=1 00:11:07.064 --rc geninfo_all_blocks=1 00:11:07.064 --rc geninfo_unexecuted_blocks=1 00:11:07.064 00:11:07.064 ' 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.064 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:07.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:07.065 16:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:07.065 16:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:15.224 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:15.224 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.224 16:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:15.224 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:15.224 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:15.224 16:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:15.224 16:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.224 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:15.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:11:15.225 00:11:15.225 --- 10.0.0.2 ping statistics --- 00:11:15.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.225 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:15.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:11:15.225 00:11:15.225 --- 10.0.0.1 ping statistics --- 00:11:15.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.225 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:15.225 ************************************ 00:11:15.225 START TEST nvmf_filesystem_no_in_capsule 00:11:15.225 ************************************ 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=2584851 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 2584851 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2584851 ']' 00:11:15.225 
16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.225 16:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.225 [2024-10-01 16:36:06.279790] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:11:15.225 [2024-10-01 16:36:06.279851] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.225 [2024-10-01 16:36:06.366832] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.225 [2024-10-01 16:36:06.460360] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.225 [2024-10-01 16:36:06.460432] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.225 [2024-10-01 16:36:06.460441] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.225 [2024-10-01 16:36:06.460448] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.225 [2024-10-01 16:36:06.460453] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
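Note: the records around here show nvmf_tgt being started inside the cvl_0_0_ns_spdk namespace (pid 2584851) and the harness waiting on its RPC socket. A sketch of that launch (flags, namespace name, and socket path copied from the log; the polling loop is an assumption standing in for the harness's waitforlisten helper):

    # Start the target in the test namespace, then wait for the UNIX-domain
    # RPC socket to answer before issuing configuration RPCs.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died early
        sleep 0.5
    done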
00:11:15.225 [2024-10-01 16:36:06.460585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.225 [2024-10-01 16:36:06.460710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.225 [2024-10-01 16:36:06.460843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.225 [2024-10-01 16:36:06.460846] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.796 [2024-10-01 16:36:07.221225] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.796 Malloc1 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.796 16:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.796 [2024-10-01 16:36:07.327674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:15.796 { 00:11:15.796 "name": "Malloc1", 00:11:15.796 "aliases": [ 00:11:15.796 "b58e56f7-e3a2-4790-99f4-3572da48fb2a" 00:11:15.796 ], 00:11:15.796 "product_name": "Malloc disk", 00:11:15.796 "block_size": 512, 00:11:15.796 "num_blocks": 1048576, 00:11:15.796 "uuid": "b58e56f7-e3a2-4790-99f4-3572da48fb2a", 00:11:15.796 "assigned_rate_limits": { 00:11:15.796 "rw_ios_per_sec": 0, 00:11:15.796 "rw_mbytes_per_sec": 0, 00:11:15.796 "r_mbytes_per_sec": 0, 00:11:15.796 "w_mbytes_per_sec": 0 00:11:15.796 }, 00:11:15.796 "claimed": true, 00:11:15.796 "claim_type": "exclusive_write", 00:11:15.796 "zoned": false, 00:11:15.796 "supported_io_types": { 00:11:15.796 "read": 
true, 00:11:15.796 "write": true, 00:11:15.796 "unmap": true, 00:11:15.796 "flush": true, 00:11:15.796 "reset": true, 00:11:15.796 "nvme_admin": false, 00:11:15.796 "nvme_io": false, 00:11:15.796 "nvme_io_md": false, 00:11:15.796 "write_zeroes": true, 00:11:15.796 "zcopy": true, 00:11:15.796 "get_zone_info": false, 00:11:15.796 "zone_management": false, 00:11:15.796 "zone_append": false, 00:11:15.796 "compare": false, 00:11:15.796 "compare_and_write": false, 00:11:15.796 "abort": true, 00:11:15.796 "seek_hole": false, 00:11:15.796 "seek_data": false, 00:11:15.796 "copy": true, 00:11:15.796 "nvme_iov_md": false 00:11:15.796 }, 00:11:15.796 "memory_domains": [ 00:11:15.796 { 00:11:15.796 "dma_device_id": "system", 00:11:15.796 "dma_device_type": 1 00:11:15.796 }, 00:11:15.796 { 00:11:15.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.796 "dma_device_type": 2 00:11:15.796 } 00:11:15.796 ], 00:11:15.796 "driver_specific": {} 00:11:15.796 } 00:11:15.796 ]' 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:15.796 16:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:17.708 16:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:17.708 16:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:17.708 16:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:17.708 16:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:17.708 16:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:19.616 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:19.616 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:19.616 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:19.616 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:19.616 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:19.616 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:19.616 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:19.616 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:19.616 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:19.616 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:19.616 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:19.616 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:19.616 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:19.616 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:19.616 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:19.616 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:19.616 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:19.616 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:20.206 16:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:21.144 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:21.144 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:21.144 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:21.145 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.145 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.145 ************************************ 00:11:21.145 START TEST filesystem_ext4 00:11:21.145 ************************************ 00:11:21.145 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
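The run_test banner above kicks off the first of three per-filesystem subtests (ext4, btrfs, xfs). Everything they share was assembled by the trace that precedes it: a 512 MiB Malloc bdev (1048576 blocks of 512 bytes) exported as a namespace of nqn.2016-06.io.spdk:cnode1, a TCP listener on 10.0.0.2:4420, a host-side nvme connect that surfaces the namespace as /dev/nvme0n1, and one GPT partition spanning the device. A minimal by-hand sketch of that setup, with names, addresses, and sizes taken from the trace, assuming a running nvmf_tgt (rpc_cmd in the trace is a wrapper around SPDK's scripts/rpc.py, and these runs actually execute inside the cvl_0_0_ns_spdk network namespace, omitted here):

  # target side: ram-backed 512 MiB bdev, subsystem, namespace, NVMe/TCP listener
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: connect, then carve one full-size GPT partition for the mkfs runs
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe

The harness additionally passes --hostnqn/--hostid on the connect, and via get_bdev_size and /sys/block it checks that the 536870912 bytes the host sees match the Malloc bdev before any mkfs is attempted.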
00:11:21.145 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:21.145 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:21.145 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:21.145 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:21.145 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:21.145 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:21.145 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:21.145 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:21.145 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:21.145 16:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:21.145 mke2fs 1.47.0 (5-Feb-2023) 00:11:21.404 Discarding device blocks: 0/522240 done 00:11:21.404 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:21.404 Filesystem UUID: 911700a2-5b76-4e9d-b43a-e7c6a677b102 00:11:21.404 Superblock backups stored on blocks: 00:11:21.404 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:21.404 00:11:21.404 Allocating group tables: 0/64 done 00:11:21.404 Writing inode tables: 0/64 done 00:11:21.404 Creating journal (8192 blocks): done 00:11:22.785 Writing superblocks and filesystem accounting information: 0/64 done 00:11:22.785 00:11:22.785 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:22.785 16:36:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:28.069 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:28.069 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:28.069 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:28.069 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:28.069 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:28.069 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:28.069 
16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2584851 00:11:28.069 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:28.329 00:11:28.329 real 0m6.986s 00:11:28.329 user 0m0.023s 00:11:28.329 sys 0m0.092s 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:28.329 ************************************ 00:11:28.329 END TEST filesystem_ext4 00:11:28.329 ************************************ 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.329 ************************************ 00:11:28.329 START TEST filesystem_btrfs 00:11:28.329 ************************************ 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:28.329 16:36:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:28.329 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:28.329 btrfs-progs v6.8.1 00:11:28.329 See https://btrfs.readthedocs.io for more information. 00:11:28.329 00:11:28.329 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:28.329 NOTE: several default settings have changed in version 5.15, please make sure 00:11:28.329 this does not affect your deployments: 00:11:28.329 - DUP for metadata (-m dup) 00:11:28.329 - enabled no-holes (-O no-holes) 00:11:28.329 - enabled free-space-tree (-R free-space-tree) 00:11:28.329 00:11:28.329 Label: (null) 00:11:28.329 UUID: 597467f9-0e1f-4f4d-adc8-be91a9e071d7 00:11:28.329 Node size: 16384 00:11:28.329 Sector size: 4096 (CPU page size: 4096) 00:11:28.329 Filesystem size: 510.00MiB 00:11:28.329 Block group profiles: 00:11:28.329 Data: single 8.00MiB 00:11:28.329 Metadata: DUP 32.00MiB 00:11:28.329 System: DUP 8.00MiB 00:11:28.329 SSD detected: yes 00:11:28.329 Zoned device: no 00:11:28.329 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:28.329 Checksum: crc32c 00:11:28.329 Number of devices: 1 00:11:28.329 Devices: 00:11:28.329 ID SIZE PATH 00:11:28.329 1 510.00MiB /dev/nvme0n1p1 00:11:28.329 00:11:28.329 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:28.329 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2584851 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:28.900 
16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:28.900 00:11:28.900 real 0m0.650s 00:11:28.900 user 0m0.031s 00:11:28.900 sys 0m0.166s 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:28.900 ************************************ 00:11:28.900 END TEST filesystem_btrfs 00:11:28.900 ************************************ 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.900 ************************************ 00:11:28.900 START TEST filesystem_xfs 00:11:28.900 ************************************ 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:28.900 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:29.160 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:29.160 = sectsz=512 attr=2, projid32bit=1 00:11:29.160 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:29.160 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:29.160 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:29.160 = sunit=0 swidth=0 blks 00:11:29.160 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:29.160 log =internal log bsize=4096 blocks=16384, version=2 00:11:29.160 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:29.160 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:30.142 Discarding blocks...Done. 00:11:30.142 16:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:30.142 16:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:33.437 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:33.697 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:33.697 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:33.697 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:33.697 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:33.697 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:33.697 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2584851 00:11:33.697 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:33.697 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:33.697 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:33.697 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:33.697 00:11:33.697 real 0m4.639s 00:11:33.697 user 0m0.026s 00:11:33.697 sys 0m0.133s 00:11:33.697 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:33.697 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:33.697 ************************************ 00:11:33.697 END TEST filesystem_xfs 00:11:33.697 ************************************ 00:11:33.697 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:33.697 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:33.697 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:33.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.957 16:36:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:33.957 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:33.957 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:33.957 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.957 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:33.957 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.957 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:33.957 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.957 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.957 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.957 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.957 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:33.957 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2584851 00:11:33.957 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2584851 ']' 00:11:33.957 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2584851 00:11:33.957 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:33.957 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:33.957 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2584851 00:11:34.217 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:34.218 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:34.218 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2584851' 00:11:34.218 killing process with pid 2584851 00:11:34.218 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2584851 00:11:34.218 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 2584851 00:11:34.218 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:34.218 00:11:34.218 real 0m19.652s 00:11:34.218 user 1m17.546s 00:11:34.218 sys 0m1.582s 00:11:34.218 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:34.218 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.218 ************************************ 00:11:34.218 END TEST nvmf_filesystem_no_in_capsule 00:11:34.218 ************************************ 00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.478 ************************************ 00:11:34.478 START TEST nvmf_filesystem_in_capsule 00:11:34.478 ************************************ 00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=2588763 00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 2588763 00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2588763 ']' 00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
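This second group repeats the same filesystem exercise with in-capsule data enabled; that is the only functional difference from the group above. The harness forwards its in_capsule argument (4096 here, 0 for the earlier group, per the '[' 0 -eq 0 ']' branch there) to the -c flag of nvmf_create_transport, the in-capsule data size, so payloads up to 4 KiB travel inside the NVMe/TCP command capsule instead of being fetched by the target in a separate data transfer. Side by side, with the 4096 variant quoted from the trace just below and the 0 variant inferred from the earlier group's parameter (-o and -u 8192 carried over verbatim from the trace):

  # group 1 (no_in_capsule): no data carried in the command capsule
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # group 2 (in_capsule): up to 4096 bytes ride in-capsule with the command
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096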
00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.478 16:36:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.478 [2024-10-01 16:36:26.005068] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:11:34.478 [2024-10-01 16:36:26.005112] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.478 [2024-10-01 16:36:26.088354] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.478 [2024-10-01 16:36:26.150661] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.478 [2024-10-01 16:36:26.150698] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.478 [2024-10-01 16:36:26.150706] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.478 [2024-10-01 16:36:26.150712] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.478 [2024-10-01 16:36:26.150717] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.478 [2024-10-01 16:36:26.150827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.478 [2024-10-01 16:36:26.150962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.478 [2024-10-01 16:36:26.150998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.478 [2024-10-01 16:36:26.150990] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.417 16:36:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:35.417 16:36:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:35.417 16:36:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:35.417 16:36:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.417 16:36:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.417 16:36:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.417 16:36:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:35.417 16:36:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:35.417 16:36:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.417 16:36:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.417 [2024-10-01 16:36:26.915103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.417 16:36:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.417 16:36:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:35.417 16:36:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.417 16:36:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.417 Malloc1 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.417 [2024-10-01 16:36:27.022818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:35.417 16:36:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.417 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:35.417 { 00:11:35.417 "name": "Malloc1", 00:11:35.417 "aliases": [ 00:11:35.417 "2166c015-5304-4467-aa84-1a8907a9ffa2" 00:11:35.417 ], 00:11:35.417 "product_name": "Malloc disk", 00:11:35.417 "block_size": 512, 00:11:35.417 "num_blocks": 1048576, 00:11:35.417 "uuid": "2166c015-5304-4467-aa84-1a8907a9ffa2", 00:11:35.417 "assigned_rate_limits": { 00:11:35.417 "rw_ios_per_sec": 0, 00:11:35.417 "rw_mbytes_per_sec": 0, 00:11:35.417 "r_mbytes_per_sec": 0, 00:11:35.417 "w_mbytes_per_sec": 0 00:11:35.417 }, 00:11:35.417 "claimed": true, 00:11:35.418 "claim_type": "exclusive_write", 00:11:35.418 "zoned": false, 00:11:35.418 "supported_io_types": { 00:11:35.418 "read": true, 00:11:35.418 "write": true, 00:11:35.418 "unmap": true, 00:11:35.418 "flush": true, 00:11:35.418 "reset": true, 00:11:35.418 "nvme_admin": false, 00:11:35.418 "nvme_io": false, 00:11:35.418 "nvme_io_md": false, 00:11:35.418 "write_zeroes": true, 00:11:35.418 "zcopy": true, 00:11:35.418 "get_zone_info": false, 00:11:35.418 "zone_management": false, 00:11:35.418 "zone_append": false, 00:11:35.418 "compare": false, 00:11:35.418 "compare_and_write": false, 00:11:35.418 "abort": true, 00:11:35.418 "seek_hole": false, 00:11:35.418 "seek_data": false, 00:11:35.418 "copy": true, 00:11:35.418 "nvme_iov_md": false 00:11:35.418 }, 00:11:35.418 "memory_domains": [ 00:11:35.418 { 00:11:35.418 "dma_device_id": "system", 00:11:35.418 "dma_device_type": 1 00:11:35.418 }, 00:11:35.418 { 00:11:35.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.418 "dma_device_type": 2 00:11:35.418 } 00:11:35.418 ], 00:11:35.418 "driver_specific": {} 00:11:35.418 } 00:11:35.418 ]' 00:11:35.418 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:35.677 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:35.677 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:35.677 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:35.677 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:35.677 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:35.677 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:35.677 16:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.057 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.057 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:37.057 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.057 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:37.057 16:36:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:39.596 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:39.596 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:39.596 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.596 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:39.596 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.596 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:39.596 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:39.596 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:39.596 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:39.596 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:39.596 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:39.596 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:39.596 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:39.596 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:39.596 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:39.596 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:39.596 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:39.596 16:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:39.596 16:36:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:40.535 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:40.535 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:40.535 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:40.535 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.535 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.795 ************************************ 00:11:40.795 START TEST filesystem_in_capsule_ext4 00:11:40.795 ************************************ 00:11:40.795 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:40.795 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:40.795 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:40.795 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:40.795 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:40.795 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:40.795 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:40.795 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:40.795 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:40.795 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:40.795 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:40.795 mke2fs 1.47.0 (5-Feb-2023) 00:11:40.795 Discarding device blocks: 0/522240 done 00:11:40.795 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:40.795 Filesystem UUID: b8a8c8e5-eef4-4380-bf1f-7859b5008e13 00:11:40.795 Superblock backups stored on blocks: 00:11:40.795 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:40.795 00:11:40.795 Allocating group tables: 0/64 done 00:11:40.795 Writing inode tables: 
0/64 done 00:11:43.336 Creating journal (8192 blocks): done 00:11:45.658 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:11:45.658 00:11:45.658 16:36:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:45.658 16:36:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2588763 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:52.234 00:11:52.234 real 0m11.104s 00:11:52.234 user 0m0.036s 00:11:52.234 sys 0m0.077s 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:52.234 ************************************ 00:11:52.234 END TEST filesystem_in_capsule_ext4 00:11:52.234 ************************************ 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.234 
************************************ 00:11:52.234 START TEST filesystem_in_capsule_btrfs 00:11:52.234 ************************************ 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:52.234 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:52.234 btrfs-progs v6.8.1 00:11:52.234 See https://btrfs.readthedocs.io for more information. 00:11:52.234 00:11:52.235 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:52.235 NOTE: several default settings have changed in version 5.15, please make sure 00:11:52.235 this does not affect your deployments: 00:11:52.235 - DUP for metadata (-m dup) 00:11:52.235 - enabled no-holes (-O no-holes) 00:11:52.235 - enabled free-space-tree (-R free-space-tree) 00:11:52.235 00:11:52.235 Label: (null) 00:11:52.235 UUID: 4678d9fa-1ce1-4922-bab7-5cbd4a2c9bd3 00:11:52.235 Node size: 16384 00:11:52.235 Sector size: 4096 (CPU page size: 4096) 00:11:52.235 Filesystem size: 510.00MiB 00:11:52.235 Block group profiles: 00:11:52.235 Data: single 8.00MiB 00:11:52.235 Metadata: DUP 32.00MiB 00:11:52.235 System: DUP 8.00MiB 00:11:52.235 SSD detected: yes 00:11:52.235 Zoned device: no 00:11:52.235 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:52.235 Checksum: crc32c 00:11:52.235 Number of devices: 1 00:11:52.235 Devices: 00:11:52.235 ID SIZE PATH 00:11:52.235 1 510.00MiB /dev/nvme0n1p1 00:11:52.235 00:11:52.235 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:52.235 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:52.809 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:52.809 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:52.809 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:52.809 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:52.809 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:52.809 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:52.809 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2588763 00:11:52.809 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:52.809 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:52.809 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:52.809 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:52.809 00:11:52.809 real 0m1.032s 00:11:52.809 user 0m0.028s 00:11:52.809 sys 0m0.120s 00:11:52.809 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:52.809 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:52.809 ************************************ 00:11:52.809 END TEST filesystem_in_capsule_btrfs 00:11:52.809 ************************************ 00:11:52.809 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:52.809 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:52.809 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:52.809 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.107 ************************************ 00:11:53.107 START TEST filesystem_in_capsule_xfs 00:11:53.107 ************************************ 00:11:53.107 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:53.107 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:53.107 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:53.107 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:53.107 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:53.107 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:53.107 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:53.107 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:53.107 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:53.107 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:53.107 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:53.107 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:53.107 = sectsz=512 attr=2, projid32bit=1 00:11:53.107 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:53.107 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:53.107 data = bsize=4096 blocks=130560, imaxpct=25 00:11:53.107 = sunit=0 swidth=0 blks 00:11:53.107 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:53.107 log =internal log bsize=4096 blocks=16384, version=2 00:11:53.107 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:53.107 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:54.126 Discarding blocks...Done. 
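Every mkfs in this log funnels through the make_filesystem helper whose xtrace lines (common/autotest_common.sh@926 through @937, with @945 returning 0 on success) recur above. Its main job is picking the right force flag, since mke2fs spells it -F while mkfs.btrfs and mkfs.xfs use -f. A condensed sketch reconstructed from those trace lines (the i counter the helper declares, presumably retry bookkeeping, is elided):

  make_filesystem() {
      local fstype=$1
      local dev_name=$2
      local force
      # ext4's mkfs takes an uppercase -F to force; btrfs and xfs take -f
      if [ "$fstype" = ext4 ]; then
          force=-F
      else
          force=-f
      fi
      mkfs."$fstype" $force "$dev_name" && return 0
  }

Once it returns 0, the partition is mounted at /mnt/device and given the same touch/sync/rm/umount smoke test each time, followed by the kill -0 check that the target process is still alive and the lsblk greps confirming both the namespace and its partition are still visible.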
00:11:54.126 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:54.126 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:56.036 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:56.036 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:56.036 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:56.036 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:56.036 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:56.036 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:56.036 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2588763 00:11:56.036 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:56.036 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:56.036 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:56.036 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:56.036 00:11:56.036 real 0m3.159s 00:11:56.036 user 0m0.028s 00:11:56.036 sys 0m0.076s 00:11:56.036 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.036 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:56.036 ************************************ 00:11:56.036 END TEST filesystem_in_capsule_xfs 00:11:56.036 ************************************ 00:11:56.296 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:56.296 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:56.296 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.296 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2588763 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2588763 ']' 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2588763 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2588763 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2588763' 00:11:56.297 killing process with pid 2588763 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2588763 00:11:56.297 16:36:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2588763 00:11:56.557 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:56.557 00:11:56.557 real 0m22.242s 00:11:56.557 user 1m27.930s 00:11:56.557 sys 0m1.482s 00:11:56.557 16:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.557 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.557 ************************************ 00:11:56.557 END TEST nvmf_filesystem_in_capsule 00:11:56.557 ************************************ 00:11:56.557 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:56.557 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:56.557 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:56.557 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:56.558 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:56.558 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:56.558 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:56.558 rmmod nvme_tcp 00:11:56.817 rmmod nvme_fabrics 00:11:56.817 rmmod nvme_keyring 00:11:56.817 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:56.817 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:56.817 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:56.817 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:56.817 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:56.817 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:56.817 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:56.817 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:56.817 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:56.817 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:11:56.817 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:11:56.817 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:56.817 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:56.817 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.817 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.817 16:36:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.726 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:58.726 00:11:58.726 real 0m52.286s 00:11:58.726 user 2m47.820s 00:11:58.726 sys 0m9.062s 00:11:58.726 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.726 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.726 
************************************ 00:11:58.726 END TEST nvmf_filesystem 00:11:58.726 ************************************ 00:11:58.987 16:36:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:58.987 16:36:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:58.987 16:36:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.987 16:36:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:58.987 ************************************ 00:11:58.987 START TEST nvmf_target_discovery 00:11:58.987 ************************************ 00:11:58.987 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:58.987 * Looking for test storage... 00:11:58.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:58.987 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:58.987 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:11:58.987 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:58.987 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:58.987 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:58.987 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:58.987 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:58.987 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:58.987 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:58.987 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:58.987 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:58.987 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:58.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.988 --rc genhtml_branch_coverage=1 00:11:58.988 --rc genhtml_function_coverage=1 00:11:58.988 --rc genhtml_legend=1 00:11:58.988 --rc geninfo_all_blocks=1 00:11:58.988 --rc geninfo_unexecuted_blocks=1 00:11:58.988 00:11:58.988 ' 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:58.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.988 --rc genhtml_branch_coverage=1 00:11:58.988 --rc genhtml_function_coverage=1 00:11:58.988 --rc genhtml_legend=1 00:11:58.988 --rc geninfo_all_blocks=1 00:11:58.988 --rc geninfo_unexecuted_blocks=1 00:11:58.988 00:11:58.988 ' 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:58.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.988 --rc genhtml_branch_coverage=1 00:11:58.988 --rc genhtml_function_coverage=1 00:11:58.988 --rc genhtml_legend=1 00:11:58.988 --rc geninfo_all_blocks=1 00:11:58.988 --rc geninfo_unexecuted_blocks=1 00:11:58.988 00:11:58.988 ' 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:58.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.988 --rc genhtml_branch_coverage=1 00:11:58.988 --rc genhtml_function_coverage=1 00:11:58.988 --rc genhtml_legend=1 00:11:58.988 --rc geninfo_all_blocks=1 00:11:58.988 --rc geninfo_unexecuted_blocks=1 00:11:58.988 00:11:58.988 ' 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:58.988 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:58.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:58.989 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.121 16:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:07.121 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:07.121 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:07.121 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
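Both e810 ports enumerate as cvl_0_0 and cvl_0_1. The nvmf_tcp_init entries a few lines below then wire them into a point-to-point setup, with the target port isolated in its own network namespace. Collected here as a stand-alone sketch of that sequence; the interface names and 10.0.0.x addressing are specific to this host, and the harness additionally tags the iptables rule with an SPDK_NVMF comment, omitted here:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                  # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                            # host -> namespace reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host reachability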
00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:07.121 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:07.121 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.122 16:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:12:07.122 00:12:07.122 --- 10.0.0.2 ping statistics --- 00:12:07.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.122 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:12:07.122 00:12:07.122 --- 10.0.0.1 ping statistics --- 00:12:07.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.122 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.122 16:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=2596576 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 2596576 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2596576 ']' 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:07.122 16:36:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.122 [2024-10-01 16:36:57.671531] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:12:07.122 [2024-10-01 16:36:57.671583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.122 [2024-10-01 16:36:57.754278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.122 [2024-10-01 16:36:57.817638] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.122 [2024-10-01 16:36:57.817675] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.122 [2024-10-01 16:36:57.817682] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.122 [2024-10-01 16:36:57.817689] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.122 [2024-10-01 16:36:57.817694] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
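With the target up inside the namespace, the discovery test below provisions four identical subsystems over RPC and then queries the discovery service from the initiator side. A sketch of that sequence follows, with flags copied from the xtrace further down; the rpc invocation is an assumption (rpc_cmd in SPDK's autotest helpers normally resolves to scripts/rpc.py against /var/tmp/spdk.sock):

    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    rpc=scripts/rpc.py                            # assumed expansion of rpc_cmd
    $rpc nvmf_create_transport -t tcp -o -u 8192  # TCP transport, -u sets in-capsule data size
    for i in 1 2 3 4; do
        $rpc bdev_null_create Null$i 102400 512   # size/block size from NULL_BDEV_SIZE/NULL_BLOCK_SIZE
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430   # surfaces as discovery log entry 5
    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a

Null bdevs suit this test because the discovery path only needs namespaces to exist: they discard writes and return zeroes, consuming no storage. The resulting six-record discovery log (the current discovery subsystem, four NVMe subsystems, and the 4430 referral) and the matching nvmf_get_subsystems JSON appear in the entries below.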
00:12:07.122 [2024-10-01 16:36:57.817802] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.122 [2024-10-01 16:36:57.817935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.122 [2024-10-01 16:36:57.818082] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.122 [2024-10-01 16:36:57.818211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.122 [2024-10-01 16:36:58.563312] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.122 Null1 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.122 16:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.122 [2024-10-01 16:36:58.632429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.122 Null2 00:12:07.122 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:07.123 Null3 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.123 Null4 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.123 16:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.123 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.384 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.384 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 4420 00:12:07.384 00:12:07.384 Discovery Log Number of Records 6, Generation counter 6 00:12:07.384 =====Discovery Log Entry 0====== 00:12:07.384 trtype: tcp 00:12:07.384 adrfam: ipv4 00:12:07.384 subtype: current discovery subsystem 00:12:07.384 treq: not required 00:12:07.384 portid: 0 00:12:07.384 trsvcid: 4420 00:12:07.384 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:07.384 traddr: 10.0.0.2 00:12:07.384 eflags: explicit discovery connections, duplicate discovery information 00:12:07.384 sectype: none 00:12:07.384 =====Discovery Log Entry 1====== 00:12:07.384 trtype: tcp 00:12:07.384 adrfam: ipv4 00:12:07.384 subtype: nvme subsystem 00:12:07.384 treq: not required 00:12:07.384 portid: 0 00:12:07.384 trsvcid: 4420 00:12:07.384 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:07.384 traddr: 10.0.0.2 00:12:07.384 eflags: none 00:12:07.384 sectype: none 00:12:07.384 =====Discovery Log Entry 2====== 00:12:07.384 trtype: tcp 00:12:07.384 adrfam: ipv4 00:12:07.384 subtype: nvme subsystem 00:12:07.384 treq: not required 00:12:07.384 portid: 0 00:12:07.384 trsvcid: 4420 00:12:07.384 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:07.384 traddr: 10.0.0.2 00:12:07.384 eflags: none 00:12:07.384 sectype: none 00:12:07.384 =====Discovery Log Entry 3====== 00:12:07.384 trtype: tcp 00:12:07.384 adrfam: ipv4 00:12:07.384 subtype: nvme subsystem 00:12:07.384 treq: not required 00:12:07.384 portid: 0 00:12:07.384 trsvcid: 4420 00:12:07.384 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:07.384 traddr: 10.0.0.2 00:12:07.384 eflags: none 00:12:07.384 sectype: none 00:12:07.384 =====Discovery Log Entry 4====== 00:12:07.384 trtype: tcp 00:12:07.384 adrfam: ipv4 00:12:07.384 subtype: nvme subsystem 
00:12:07.384 treq: not required 00:12:07.384 portid: 0 00:12:07.384 trsvcid: 4420 00:12:07.384 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:07.384 traddr: 10.0.0.2 00:12:07.384 eflags: none 00:12:07.384 sectype: none 00:12:07.384 =====Discovery Log Entry 5====== 00:12:07.384 trtype: tcp 00:12:07.384 adrfam: ipv4 00:12:07.384 subtype: discovery subsystem referral 00:12:07.384 treq: not required 00:12:07.384 portid: 0 00:12:07.384 trsvcid: 4430 00:12:07.384 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:07.384 traddr: 10.0.0.2 00:12:07.384 eflags: none 00:12:07.384 sectype: none 00:12:07.384 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:07.384 Perform nvmf subsystem discovery via RPC 00:12:07.384 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:07.384 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.384 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.384 [ 00:12:07.384 { 00:12:07.384 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:07.384 "subtype": "Discovery", 00:12:07.384 "listen_addresses": [ 00:12:07.384 { 00:12:07.384 "trtype": "TCP", 00:12:07.384 "adrfam": "IPv4", 00:12:07.384 "traddr": "10.0.0.2", 00:12:07.384 "trsvcid": "4420" 00:12:07.384 } 00:12:07.384 ], 00:12:07.384 "allow_any_host": true, 00:12:07.384 "hosts": [] 00:12:07.384 }, 00:12:07.384 { 00:12:07.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:07.384 "subtype": "NVMe", 00:12:07.384 "listen_addresses": [ 00:12:07.384 { 00:12:07.384 "trtype": "TCP", 00:12:07.384 "adrfam": "IPv4", 00:12:07.384 "traddr": "10.0.0.2", 00:12:07.384 "trsvcid": "4420" 00:12:07.384 } 00:12:07.384 ], 00:12:07.385 "allow_any_host": true, 00:12:07.385 "hosts": [], 00:12:07.385 "serial_number": "SPDK00000000000001", 00:12:07.385 "model_number": "SPDK bdev Controller", 00:12:07.385 "max_namespaces": 32, 00:12:07.385 "min_cntlid": 1, 00:12:07.385 "max_cntlid": 65519, 00:12:07.385 "namespaces": [ 00:12:07.385 { 00:12:07.385 "nsid": 1, 00:12:07.385 "bdev_name": "Null1", 00:12:07.385 "name": "Null1", 00:12:07.385 "nguid": "BA3FFB4923DF4410BC77B101FC7A1900", 00:12:07.385 "uuid": "ba3ffb49-23df-4410-bc77-b101fc7a1900" 00:12:07.385 } 00:12:07.385 ] 00:12:07.385 }, 00:12:07.385 { 00:12:07.385 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:07.385 "subtype": "NVMe", 00:12:07.385 "listen_addresses": [ 00:12:07.385 { 00:12:07.385 "trtype": "TCP", 00:12:07.385 "adrfam": "IPv4", 00:12:07.385 "traddr": "10.0.0.2", 00:12:07.385 "trsvcid": "4420" 00:12:07.385 } 00:12:07.385 ], 00:12:07.385 "allow_any_host": true, 00:12:07.385 "hosts": [], 00:12:07.385 "serial_number": "SPDK00000000000002", 00:12:07.385 "model_number": "SPDK bdev Controller", 00:12:07.385 "max_namespaces": 32, 00:12:07.385 "min_cntlid": 1, 00:12:07.385 "max_cntlid": 65519, 00:12:07.385 "namespaces": [ 00:12:07.385 { 00:12:07.385 "nsid": 1, 00:12:07.385 "bdev_name": "Null2", 00:12:07.385 "name": "Null2", 00:12:07.385 "nguid": "3CA6B1E4EB0D4650820F87F9FAB6C34B", 00:12:07.385 "uuid": "3ca6b1e4-eb0d-4650-820f-87f9fab6c34b" 00:12:07.385 } 00:12:07.385 ] 00:12:07.385 }, 00:12:07.385 { 00:12:07.385 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:07.385 "subtype": "NVMe", 00:12:07.385 "listen_addresses": [ 00:12:07.385 { 00:12:07.385 "trtype": "TCP", 00:12:07.385 "adrfam": "IPv4", 00:12:07.385 "traddr": "10.0.0.2", 
00:12:07.385 "trsvcid": "4420" 00:12:07.385 } 00:12:07.385 ], 00:12:07.385 "allow_any_host": true, 00:12:07.385 "hosts": [], 00:12:07.385 "serial_number": "SPDK00000000000003", 00:12:07.385 "model_number": "SPDK bdev Controller", 00:12:07.385 "max_namespaces": 32, 00:12:07.385 "min_cntlid": 1, 00:12:07.385 "max_cntlid": 65519, 00:12:07.385 "namespaces": [ 00:12:07.385 { 00:12:07.385 "nsid": 1, 00:12:07.385 "bdev_name": "Null3", 00:12:07.385 "name": "Null3", 00:12:07.385 "nguid": "E0FAFC17DE034608ACCD6F39173B0C86", 00:12:07.385 "uuid": "e0fafc17-de03-4608-accd-6f39173b0c86" 00:12:07.385 } 00:12:07.385 ] 00:12:07.385 }, 00:12:07.385 { 00:12:07.385 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:07.385 "subtype": "NVMe", 00:12:07.385 "listen_addresses": [ 00:12:07.385 { 00:12:07.385 "trtype": "TCP", 00:12:07.385 "adrfam": "IPv4", 00:12:07.385 "traddr": "10.0.0.2", 00:12:07.385 "trsvcid": "4420" 00:12:07.385 } 00:12:07.385 ], 00:12:07.385 "allow_any_host": true, 00:12:07.385 "hosts": [], 00:12:07.385 "serial_number": "SPDK00000000000004", 00:12:07.385 "model_number": "SPDK bdev Controller", 00:12:07.385 "max_namespaces": 32, 00:12:07.385 "min_cntlid": 1, 00:12:07.385 "max_cntlid": 65519, 00:12:07.385 "namespaces": [ 00:12:07.385 { 00:12:07.385 "nsid": 1, 00:12:07.385 "bdev_name": "Null4", 00:12:07.385 "name": "Null4", 00:12:07.385 "nguid": "AC16D1D472F74BB39C02314C0AFA603A", 00:12:07.385 "uuid": "ac16d1d4-72f7-4bb3-9c02-314c0afa603a" 00:12:07.385 } 00:12:07.385 ] 00:12:07.385 } 00:12:07.385 ] 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.385 16:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.385 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:07.645 16:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.645 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:07.646 rmmod nvme_tcp 00:12:07.646 rmmod nvme_fabrics 00:12:07.646 rmmod nvme_keyring 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 2596576 ']' 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 2596576 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2596576 ']' 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2596576 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2596576 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2596576' 00:12:07.646 killing process with pid 2596576 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2596576 00:12:07.646 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2596576 00:12:07.906 16:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:07.907 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:07.907 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:07.907 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:07.907 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:12:07.907 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:12:07.907 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:07.907 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:07.907 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:07.907 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.907 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.907 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.819 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:09.819 00:12:09.819 real 0m11.037s 00:12:09.819 user 0m8.733s 00:12:09.819 sys 0m5.495s 00:12:09.819 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.819 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.819 ************************************ 00:12:09.819 END TEST nvmf_target_discovery 00:12:09.819 ************************************ 00:12:10.079 16:37:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:10.079 16:37:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:10.079 16:37:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:10.079 16:37:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:10.079 ************************************ 00:12:10.079 START TEST nvmf_referrals 00:12:10.079 ************************************ 00:12:10.079 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:10.079 * Looking for test storage... 
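Annotation: the teardown traced above mirrors the setup loop at the start of discovery.sh: four null bdevs, each wrapped in its own subsystem with a TCP listener on 4420, plus a discovery listener and one referral on port 4430. A condensed sketch of that cycle using SPDK's stock scripts/rpc.py (NQNs, the 10.0.0.2 address, and ports are the values from this run; this is not the test script itself):

    # setup (run from the SPDK repo root against an already-started nvmf_tgt)
    for i in 1 2 3 4; do
        scripts/rpc.py bdev_null_create "Null$i" 102400 512        # size/block size as used here
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"                            # -a: allow any host
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

    # teardown, which is what the trace above just performed
    for i in 1 2 3 4; do
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        scripts/rpc.py bdev_null_delete "Null$i"
    done
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430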
00:12:10.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.079 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:10.079 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:12:10.079 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:10.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.341 --rc genhtml_branch_coverage=1 00:12:10.341 --rc genhtml_function_coverage=1 00:12:10.341 --rc genhtml_legend=1 00:12:10.341 --rc geninfo_all_blocks=1 00:12:10.341 --rc geninfo_unexecuted_blocks=1 00:12:10.341 00:12:10.341 ' 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:10.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.341 --rc genhtml_branch_coverage=1 00:12:10.341 --rc genhtml_function_coverage=1 00:12:10.341 --rc genhtml_legend=1 00:12:10.341 --rc geninfo_all_blocks=1 00:12:10.341 --rc geninfo_unexecuted_blocks=1 00:12:10.341 00:12:10.341 ' 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:10.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.341 --rc genhtml_branch_coverage=1 00:12:10.341 --rc genhtml_function_coverage=1 00:12:10.341 --rc genhtml_legend=1 00:12:10.341 --rc geninfo_all_blocks=1 00:12:10.341 --rc geninfo_unexecuted_blocks=1 00:12:10.341 00:12:10.341 ' 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:10.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.341 --rc genhtml_branch_coverage=1 00:12:10.341 --rc genhtml_function_coverage=1 00:12:10.341 --rc genhtml_legend=1 00:12:10.341 --rc geninfo_all_blocks=1 00:12:10.341 --rc geninfo_unexecuted_blocks=1 00:12:10.341 00:12:10.341 ' 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.341 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:10.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
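Annotation: referrals.sh parameterizes the test with three referral addresses, 127.0.0.2 through 127.0.0.4 (the third, the 4430 referral port, and the NQN constants are assigned just below). A referral is an extra discovery-log entry that redirects initiators to another discovery service. Assuming a target that is already up, the basic add/inspect round-trip is a sketch like:

    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_get_referrals
    # expected shape (trimmed): [ { "address": { "trtype": "TCP", "adrfam": "IPv4",
    #                                "traddr": "127.0.0.2", "trsvcid": "4430" }, ... } ]

The test's jq filter '.[].address.traddr' pulls exactly those traddr fields when it compares the RPC view against the initiator's view.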
00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:10.342 16:37:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:16.927 16:37:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:16.927 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:16.927 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:16.927 
16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:16.927 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:16.927 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:16.927 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:16.928 16:37:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:16.928 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:17.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:17.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms
00:12:17.187 
00:12:17.187 --- 10.0.0.2 ping statistics ---
00:12:17.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:17.187 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:17.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:17.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms
00:12:17.187 
00:12:17.187 --- 10.0.0.1 ping statistics ---
00:12:17.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:17.187 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=2600823
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 2600823
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2600823 ']'
00:12:17.187 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:17.188 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:17.188 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:17.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
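Annotation: nvmfappstart launches the target inside the cvl_0_0_ns_spdk namespace created above, so 10.0.0.2 is local to the target while the initiator side keeps 10.0.0.1. Reproduced by hand outside the harness it would be roughly the following (waitforlisten is the harness's poll helper; the until-loop here is a hypothetical stand-in for it):

    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                        # -m 0xF: reactors on cores 0-3; -i 0: shm id; -e: tracepoint mask
    # poll the RPC socket until the app is ready to accept rpc.py calls
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done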
00:12:17.188 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:17.188 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.188 [2024-10-01 16:37:08.835685] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:12:17.188 [2024-10-01 16:37:08.835748] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.447 [2024-10-01 16:37:08.922700] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.447 [2024-10-01 16:37:09.016599] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.447 [2024-10-01 16:37:09.016655] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.447 [2024-10-01 16:37:09.016664] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.447 [2024-10-01 16:37:09.016671] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.447 [2024-10-01 16:37:09.016677] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.447 [2024-10-01 16:37:09.016803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.447 [2024-10-01 16:37:09.016945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.447 [2024-10-01 16:37:09.017101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.447 [2024-10-01 16:37:09.017295] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.389 [2024-10-01 16:37:09.782441] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
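Annotation: port 8009 is the standard NVMe-oF discovery service port, and the bare word "discovery" in the add_listener call is rpc.py shorthand for the well-known discovery subsystem NQN (nqn.2014-08.org.nvmexpress.discovery); the listener notice that confirms it follows just below. Condensed, the bring-up traced here is:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # flags as passed by this run; -u is the I/O unit size
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009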
00:12:18.389 [2024-10-01 16:37:09.791366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:18.389 16:37:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:18.650 16:37:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:18.650 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:18.911 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:18.912 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:19.172 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:19.172 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:19.172 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:19.172 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:19.172 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:19.172 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.172 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:19.432 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:19.432 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:19.432 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:19.432 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:19.432 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.432 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.432 16:37:11 
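Subsystem-directed referrals surface in the discovery log page as their own record types, which is what the get_discovery_entries checks above rely on: the cnode1 referral shows up with subtype "nvme subsystem", while the plain discovery referral shows up as "discovery subsystem referral". A sketch of that filter on its own (hostnqn/hostid flags omitted; address values from this run):

    # Pull the subsystem NQNs of a given record subtype out of the
    # discovery log page, mirroring get_discovery_entries in referrals.sh.
    get_discovery_entries() {
        local subtype=$1
        nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
            jq -r --arg st "$subtype" '.records[] | select(.subtype == $st) | .subnqn'
    }
    get_discovery_entries "nvme subsystem"                # expect nqn.2016-06.io.spdk:cnode1
    get_discovery_entries "discovery subsystem referral"  # expect nqn.2014-08.org.nvmexpress.discovery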
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:19.432 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:19.693 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:19.693 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:19.693 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:19.693 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:19.693 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:19.693 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.693 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:19.952 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:19.952 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:19.952 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:19.952 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:19.952 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.952 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:20.212 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:20.212 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:20.212 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.212 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.212 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.212 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:20.212 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:20.212 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.212 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.212 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.212 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:20.212 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:20.212 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:20.212 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:20.212 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:20.213 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:20.213 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:20.473 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:20.473 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:20.473 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:20.473 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:20.473 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:20.473 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:20.473 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:12:20.473 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:20.473 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:20.473 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:20.473 rmmod nvme_tcp 00:12:20.473 rmmod nvme_fabrics 00:12:20.473 rmmod nvme_keyring 00:12:20.473 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:20.473 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:20.473 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:20.473 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 2600823 ']' 00:12:20.473 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 2600823 00:12:20.473 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2600823 ']' 00:12:20.473 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2600823 00:12:20.473 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:20.473 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:20.473 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2600823 00:12:20.473 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:20.473 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:20.473 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2600823' 00:12:20.473 killing process with pid 2600823 00:12:20.473 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 2600823 00:12:20.473 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2600823 00:12:20.734 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:20.734 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:20.734 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:20.734 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:20.734 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:12:20.734 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:20.734 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:12:20.734 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:20.734 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:20.734 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.734 16:37:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.734 16:37:12 
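nvmftestfini unwinds the fixture in reverse order: unload the host-side NVMe modules (the rmmod lines above), kill the target by the pid recorded at startup, strip only the SPDK-tagged iptables rules, then tear down the networking; the namespace removal and final address flush land just below. Condensed, the sequence amounts to something like this (pid and interface names from this run; _remove_spdk_ns is assumed to reduce to an ip netns del):

    modprobe -r nvme-tcp nvme-fabrics                      # host-side kernel stack
    kill 2600823                                           # nvmf_tgt pid from startup
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK rules
    ip netns del cvl_0_0_ns_spdk                           # target namespace (assumed equivalent)
    ip -4 addr flush cvl_0_1                               # initiator-side address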
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.643 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:22.643 00:12:22.643 real 0m12.725s 00:12:22.643 user 0m15.953s 00:12:22.643 sys 0m6.022s 00:12:22.643 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:22.643 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.643 ************************************ 00:12:22.643 END TEST nvmf_referrals 00:12:22.643 ************************************ 00:12:22.904 16:37:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:22.904 16:37:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:22.904 16:37:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:22.904 16:37:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:22.904 ************************************ 00:12:22.904 START TEST nvmf_connect_disconnect 00:12:22.904 ************************************ 00:12:22.904 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:22.905 * Looking for test storage... 00:12:22.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:22.905 16:37:14 
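Before the storage search, the harness probes lcov: cmp_versions, traced here and over the next lines, walks the two dotted version strings field by field so that lt 1.15 2 can decide whether the installed lcov predates 2.x and therefore needs the branch/function-coverage flags exported via LCOV_OPTS. The idea in isolation (simplified; the real scripts/common.sh also splits on '-'):

    # Return 0 (true) when dotted version $1 is strictly less than $2.
    lt() {
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }
    lt 1.15 2 && echo "old lcov: enabling branch/function coverage flags"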
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.905 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:23.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.166 --rc genhtml_branch_coverage=1 00:12:23.166 --rc genhtml_function_coverage=1 00:12:23.166 --rc genhtml_legend=1 00:12:23.166 --rc geninfo_all_blocks=1 00:12:23.166 --rc geninfo_unexecuted_blocks=1 00:12:23.166 00:12:23.166 ' 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:23.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.166 --rc genhtml_branch_coverage=1 00:12:23.166 --rc genhtml_function_coverage=1 00:12:23.166 --rc genhtml_legend=1 00:12:23.166 --rc geninfo_all_blocks=1 00:12:23.166 --rc geninfo_unexecuted_blocks=1 00:12:23.166 00:12:23.166 ' 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:23.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.166 --rc genhtml_branch_coverage=1 00:12:23.166 --rc genhtml_function_coverage=1 00:12:23.166 --rc genhtml_legend=1 00:12:23.166 --rc geninfo_all_blocks=1 00:12:23.166 --rc geninfo_unexecuted_blocks=1 00:12:23.166 00:12:23.166 ' 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:23.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.166 --rc genhtml_branch_coverage=1 00:12:23.166 --rc genhtml_function_coverage=1 00:12:23.166 --rc genhtml_legend=1 00:12:23.166 --rc geninfo_all_blocks=1 00:12:23.166 --rc geninfo_unexecuted_blocks=1 00:12:23.166 00:12:23.166 ' 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.166 16:37:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:23.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:23.166 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.167 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:23.167 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:23.167 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:23.167 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.167 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.167 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.167 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:23.167 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:23.167 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:23.167 16:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:31.303 
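On a phy run, nvmftestinit has to find real NICs: gather_supported_nvmf_pci_devs, starting above and continuing below, builds tables of known vendor:device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, a list of Mellanox parts) and then walks sysfs to collect the net devices behind each match. The sysfs walk for the E810 ID this rig reports (0x8086:0x159b) looks roughly like:

    # Enumerate net devices backed by Intel E810 NICs (IDs from the trace above).
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "${pci##*/}: ${net##*/}"
        done
    done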
16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:31.303 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:31.303 
16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:31.303 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:31.303 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:31.304 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:31.304 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:31.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:12:31.304 00:12:31.304 --- 10.0.0.2 ping statistics --- 00:12:31.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.304 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:12:31.304 00:12:31.304 --- 10.0.0.1 ping statistics --- 00:12:31.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.304 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=2605438 00:12:31.304 16:37:21 
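The target runs in its own network namespace so initiator traffic actually crosses between the two NIC ports instead of short-circuiting through the local stack: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), one iptables rule opens the NVMe/TCP port, and the two pings above prove reachability in both directions. Reconstructed from the trace, the wiring is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT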
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 2605438 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2605438 ']' 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:31.304 16:37:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.304 [2024-10-01 16:37:21.945622] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:12:31.304 [2024-10-01 16:37:21.945686] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.304 [2024-10-01 16:37:22.032049] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.304 [2024-10-01 16:37:22.127485] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.304 [2024-10-01 16:37:22.127537] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.304 [2024-10-01 16:37:22.127546] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.304 [2024-10-01 16:37:22.127553] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.304 [2024-10-01 16:37:22.127560] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:31.304 [2024-10-01 16:37:22.127684] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.304 [2024-10-01 16:37:22.127813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.304 [2024-10-01 16:37:22.127958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.304 [2024-10-01 16:37:22.127960] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.304 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:31.304 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:31.304 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:31.304 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:31.304 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.304 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.305 [2024-10-01 16:37:22.889383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.305 16:37:22 
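Once nvmf_tgt is up, provisioning the test subsystem is a short RPC sequence: create the TCP transport, carve a 64 MiB / 512 B-block malloc bdev, create cnode1, and attach the bdev as its namespace; the listener on 10.0.0.2:4420 is added immediately below. Against the default /var/tmp/spdk.sock, the same calls from the trace are:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                      # -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420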
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:31.305 [2024-10-01 16:37:22.937208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:31.305 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:35.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.596 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:49.596 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:49.596 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:49.596 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:49.596 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:49.596 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:49.596 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:49.596 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:49.596 rmmod nvme_tcp 00:12:49.596 rmmod nvme_fabrics 00:12:49.596 rmmod nvme_keyring 00:12:49.596 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:49.596 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:49.596 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:49.596 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 2605438 ']' 00:12:49.596 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 2605438 00:12:49.596 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2605438 ']' 00:12:49.596 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2605438 00:12:49.596 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
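The five "disconnected 1 controller(s)" lines above are the test itself: with num_iterations=5, each pass connects the host to cnode1 over TCP and immediately disconnects it, and nvme-cli prints the confirmation on every disconnect. One iteration reduces to (hostnqn/hostid flags omitted):

    for i in {1..5}; do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # "NQN:... disconnected 1 controller(s)"
    done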
00:12:49.596 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:49.596 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2605438 00:12:49.856 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:49.856 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:49.856 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2605438' 00:12:49.856 killing process with pid 2605438 00:12:49.856 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2605438 00:12:49.856 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2605438 00:12:49.856 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:49.856 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:49.856 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:49.856 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:49.856 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:12:49.856 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:12:49.856 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:49.856 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:49.856 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:49.856 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.856 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.856 16:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:52.399 00:12:52.399 real 0m29.118s 00:12:52.399 user 1m18.904s 00:12:52.399 sys 0m6.938s 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.399 ************************************ 00:12:52.399 END TEST nvmf_connect_disconnect 00:12:52.399 ************************************ 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:52.399 16:37:43 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:52.399 ************************************ 00:12:52.399 START TEST nvmf_multitarget 00:12:52.399 ************************************ 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:52.399 * Looking for test storage... 00:12:52.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.399 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:52.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.400 --rc genhtml_branch_coverage=1 00:12:52.400 --rc genhtml_function_coverage=1 00:12:52.400 --rc genhtml_legend=1 00:12:52.400 --rc geninfo_all_blocks=1 00:12:52.400 --rc geninfo_unexecuted_blocks=1 00:12:52.400 00:12:52.400 ' 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:52.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.400 --rc genhtml_branch_coverage=1 00:12:52.400 --rc genhtml_function_coverage=1 00:12:52.400 --rc genhtml_legend=1 00:12:52.400 --rc geninfo_all_blocks=1 00:12:52.400 --rc geninfo_unexecuted_blocks=1 00:12:52.400 00:12:52.400 ' 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:52.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.400 --rc genhtml_branch_coverage=1 00:12:52.400 --rc genhtml_function_coverage=1 00:12:52.400 --rc genhtml_legend=1 00:12:52.400 --rc geninfo_all_blocks=1 00:12:52.400 --rc geninfo_unexecuted_blocks=1 00:12:52.400 00:12:52.400 ' 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:52.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.400 --rc genhtml_branch_coverage=1 00:12:52.400 --rc genhtml_function_coverage=1 00:12:52.400 --rc genhtml_legend=1 00:12:52.400 --rc geninfo_all_blocks=1 00:12:52.400 --rc geninfo_unexecuted_blocks=1 00:12:52.400 00:12:52.400 ' 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.400 16:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:52.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:52.400 16:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:52.400 16:37:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:58.981 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:58.981 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:58.981 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:58.981 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:58.981 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:58.981 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:58.981 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:58.981 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:58.981 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:58.981 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:58.981 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:58.981 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
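gather_supported_nvmf_pci_devs, traced from here on, builds PCI ID lists for Intel E810/X722 and Mellanox parts and matches them against /sys/bus/pci; the "Found 0000:4b:00.x" lines that follow are its hits. A condensed equivalent for this run's E810 NICs (0x8086:0x159b), assuming lspci is available and the usual sysfs layout -- a sketch, not the harness's actual walk:

    # Enumerate E810 functions by PCI ID, then map each to its kernel net device.
    for pci in $(lspci -Dnm -d 8086:159b | awk '{print $1}'); do
        echo "Found $pci (0x8086 - 0x159b)"
        ls "/sys/bus/pci/devices/$pci/net"    # -> cvl_0_0 / cvl_0_1 in this log
    done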
00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:58.982 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:58.982 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:58.982 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:58.982 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:58.982 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:59.243 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:59.243 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:59.243 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:59.243 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:59.243 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:59.243 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:59.243 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:59.243 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:59.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:59.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:12:59.243 00:12:59.243 --- 10.0.0.2 ping statistics --- 00:12:59.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.243 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:12:59.243 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:59.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:59.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:12:59.243 00:12:59.243 --- 10.0.0.1 ping statistics --- 00:12:59.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.243 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:12:59.243 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.243 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:12:59.243 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:59.243 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.243 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:59.243 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:59.243 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.243 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:59.243 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:59.503 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:59.503 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:59.503 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:59.503 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:59.503 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=2612638 00:12:59.503 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 2612638 00:12:59.503 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:59.503 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2612638 ']' 00:12:59.504 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.504 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:59.504 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.504 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:59.504 16:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:59.504 [2024-10-01 16:37:51.026330] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:12:59.504 [2024-10-01 16:37:51.026377] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.504 [2024-10-01 16:37:51.109139] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.504 [2024-10-01 16:37:51.171301] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.504 [2024-10-01 16:37:51.171336] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.504 [2024-10-01 16:37:51.171344] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.504 [2024-10-01 16:37:51.171351] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.504 [2024-10-01 16:37:51.171356] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.504 [2024-10-01 16:37:51.171474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.504 [2024-10-01 16:37:51.171491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.504 [2024-10-01 16:37:51.171614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.504 [2024-10-01 16:37:51.171617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.444 16:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.444 16:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:13:00.444 16:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:00.444 16:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:00.444 16:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:00.444 16:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.444 16:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:00.444 16:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:00.444 16:37:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:00.444 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:00.444 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:00.721 "nvmf_tgt_1" 00:13:00.721 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:00.721 "nvmf_tgt_2" 00:13:00.721 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:13:00.721 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:01.022 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:01.022 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:01.022 true 00:13:01.022 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:01.022 true 00:13:01.022 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:01.022 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:01.314 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:01.314 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:01.314 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:01.314 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:01.314 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:01.314 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:01.314 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:01.314 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:01.314 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:01.314 rmmod nvme_tcp 00:13:01.314 rmmod nvme_fabrics 00:13:01.314 rmmod nvme_keyring 00:13:01.314 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:01.314 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:01.314 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:01.314 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 2612638 ']' 00:13:01.314 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 2612638 00:13:01.314 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2612638 ']' 00:13:01.315 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2612638 00:13:01.315 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:13:01.315 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:01.315 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2612638 00:13:01.315 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:01.315 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:01.315 16:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2612638' 00:13:01.315 killing process with pid 2612638 00:13:01.315 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2612638 00:13:01.315 16:37:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2612638 00:13:01.600 16:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:01.600 16:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:01.600 16:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:01.600 16:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:01.600 16:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:01.600 16:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:13:01.600 16:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:13:01.600 16:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:01.600 16:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:01.600 16:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.600 16:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.600 16:37:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.509 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:03.509 00:13:03.509 real 0m11.506s 00:13:03.509 user 0m10.334s 00:13:03.509 sys 0m5.818s 00:13:03.509 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:03.509 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:03.509 ************************************ 00:13:03.509 END TEST nvmf_multitarget 00:13:03.509 ************************************ 00:13:03.510 16:37:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:03.510 16:37:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:03.510 16:37:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.510 16:37:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:03.510 ************************************ 00:13:03.510 START TEST nvmf_rpc 00:13:03.510 ************************************ 00:13:03.510 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:03.770 * Looking for test storage... 
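The multitarget pass that just ended drives multitarget_rpc.py rather than the plain RPC client: it checks the target count with nvmf_get_targets piped to jq length, adds two targets, and removes them again. The exchange, reconstructed from the trace above (counts 1 -> 3 -> 1):

    multitarget_rpc.py nvmf_get_targets | jq length    # -> 1 (the default target)
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
    multitarget_rpc.py nvmf_get_targets | jq length    # -> 3
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
    multitarget_rpc.py nvmf_get_targets | jq length    # -> 1 again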
00:13:03.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:03.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.770 --rc genhtml_branch_coverage=1 00:13:03.770 --rc genhtml_function_coverage=1 00:13:03.770 --rc genhtml_legend=1 00:13:03.770 --rc geninfo_all_blocks=1 00:13:03.770 --rc geninfo_unexecuted_blocks=1 00:13:03.770 00:13:03.770 ' 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:03.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.770 --rc genhtml_branch_coverage=1 00:13:03.770 --rc genhtml_function_coverage=1 00:13:03.770 --rc genhtml_legend=1 00:13:03.770 --rc geninfo_all_blocks=1 00:13:03.770 --rc geninfo_unexecuted_blocks=1 00:13:03.770 00:13:03.770 ' 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:03.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.770 --rc genhtml_branch_coverage=1 00:13:03.770 --rc genhtml_function_coverage=1 00:13:03.770 --rc genhtml_legend=1 00:13:03.770 --rc geninfo_all_blocks=1 00:13:03.770 --rc geninfo_unexecuted_blocks=1 00:13:03.770 00:13:03.770 ' 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:03.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.770 --rc genhtml_branch_coverage=1 00:13:03.770 --rc genhtml_function_coverage=1 00:13:03.770 --rc genhtml_legend=1 00:13:03.770 --rc geninfo_all_blocks=1 00:13:03.770 --rc geninfo_unexecuted_blocks=1 00:13:03.770 00:13:03.770 ' 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
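As nvmf/common.sh is sourced below, it regenerates the host identity that every later nvme connect will present. A sketch of that derivation, assuming nvme-cli's gen-hostnqn; the suffix strip for the host ID is a guess at what common.sh does, not a quote of it:

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<uuid>, as traced below
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # bare UUID, e.g. 80f8a7aa-1216-ec11-9bc7-a4bf018b228a
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

Note also the recurring "common.sh: line 33: [: : integer expression expected" a few lines further on: the xtrace shows '[' '' -eq 1 ']', i.e. an empty variable reaching a numeric test; defaulting it (${VAR:-0}) would silence the complaint.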
00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.770 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:03.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:03.771 16:37:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:03.771 16:37:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:11.908 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:11.908 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.908 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:11.909 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:11.909 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:11.909 16:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:11.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:11.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:13:11.909 00:13:11.909 --- 10.0.0.2 ping statistics --- 00:13:11.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.909 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:11.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:11.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:13:11.909 00:13:11.909 --- 10.0.0.1 ping statistics --- 00:13:11.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.909 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=2617178 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 2617178 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2617178 ']' 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:11.909 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.909 [2024-10-01 16:38:03.024307] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
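The bidirectional ping checks above conclude the target/initiator split that nvmf_tcp_init sets up. A minimal sketch of that plumbing, using the interface names and addresses from the log (the real helper in test/nvmf/common.sh adds cleanup and error handling; on this rig the two E810 ports appear to be cabled back-to-back, which is why traffic flows between the namespaces):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"                                         # target side gets its own namespace
ip link set cvl_0_0 netns "$NS"                            # move one E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port toward the target (the test's ipts wrapper adds an SPDK_NVMF comment)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target ns -> root ns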
00:13:11.909 [2024-10-01 16:38:03.024370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.909 [2024-10-01 16:38:03.114290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:11.909 [2024-10-01 16:38:03.209092] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.909 [2024-10-01 16:38:03.209156] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.909 [2024-10-01 16:38:03.209164] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.909 [2024-10-01 16:38:03.209171] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.909 [2024-10-01 16:38:03.209177] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.909 [2024-10-01 16:38:03.209311] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.909 [2024-10-01 16:38:03.209449] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.909 [2024-10-01 16:38:03.209582] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.909 [2024-10-01 16:38:03.209585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.480 16:38:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:12.480 16:38:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:13:12.480 16:38:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:12.480 16:38:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:12.480 16:38:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.480 16:38:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.480 16:38:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:12.480 16:38:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.480 16:38:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.480 16:38:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.480 16:38:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:12.480 "tick_rate": 2600000000, 00:13:12.480 "poll_groups": [ 00:13:12.480 { 00:13:12.480 "name": "nvmf_tgt_poll_group_000", 00:13:12.480 "admin_qpairs": 0, 00:13:12.480 "io_qpairs": 0, 00:13:12.480 "current_admin_qpairs": 0, 00:13:12.480 "current_io_qpairs": 0, 00:13:12.480 "pending_bdev_io": 0, 00:13:12.480 "completed_nvme_io": 0, 00:13:12.480 "transports": [] 00:13:12.480 }, 00:13:12.481 { 00:13:12.481 "name": "nvmf_tgt_poll_group_001", 00:13:12.481 "admin_qpairs": 0, 00:13:12.481 "io_qpairs": 0, 00:13:12.481 "current_admin_qpairs": 0, 00:13:12.481 "current_io_qpairs": 0, 00:13:12.481 "pending_bdev_io": 0, 00:13:12.481 "completed_nvme_io": 0, 00:13:12.481 "transports": [] 00:13:12.481 }, 00:13:12.481 { 00:13:12.481 "name": "nvmf_tgt_poll_group_002", 00:13:12.481 "admin_qpairs": 0, 00:13:12.481 "io_qpairs": 0, 00:13:12.481 
"current_admin_qpairs": 0, 00:13:12.481 "current_io_qpairs": 0, 00:13:12.481 "pending_bdev_io": 0, 00:13:12.481 "completed_nvme_io": 0, 00:13:12.481 "transports": [] 00:13:12.481 }, 00:13:12.481 { 00:13:12.481 "name": "nvmf_tgt_poll_group_003", 00:13:12.481 "admin_qpairs": 0, 00:13:12.481 "io_qpairs": 0, 00:13:12.481 "current_admin_qpairs": 0, 00:13:12.481 "current_io_qpairs": 0, 00:13:12.481 "pending_bdev_io": 0, 00:13:12.481 "completed_nvme_io": 0, 00:13:12.481 "transports": [] 00:13:12.481 } 00:13:12.481 ] 00:13:12.481 }' 00:13:12.481 16:38:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:12.481 16:38:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:12.481 16:38:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:12.481 16:38:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.481 [2024-10-01 16:38:04.075811] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:12.481 "tick_rate": 2600000000, 00:13:12.481 "poll_groups": [ 00:13:12.481 { 00:13:12.481 "name": "nvmf_tgt_poll_group_000", 00:13:12.481 "admin_qpairs": 0, 00:13:12.481 "io_qpairs": 0, 00:13:12.481 "current_admin_qpairs": 0, 00:13:12.481 "current_io_qpairs": 0, 00:13:12.481 "pending_bdev_io": 0, 00:13:12.481 "completed_nvme_io": 0, 00:13:12.481 "transports": [ 00:13:12.481 { 00:13:12.481 "trtype": "TCP" 00:13:12.481 } 00:13:12.481 ] 00:13:12.481 }, 00:13:12.481 { 00:13:12.481 "name": "nvmf_tgt_poll_group_001", 00:13:12.481 "admin_qpairs": 0, 00:13:12.481 "io_qpairs": 0, 00:13:12.481 "current_admin_qpairs": 0, 00:13:12.481 "current_io_qpairs": 0, 00:13:12.481 "pending_bdev_io": 0, 00:13:12.481 "completed_nvme_io": 0, 00:13:12.481 "transports": [ 00:13:12.481 { 00:13:12.481 "trtype": "TCP" 00:13:12.481 } 00:13:12.481 ] 00:13:12.481 }, 00:13:12.481 { 00:13:12.481 "name": "nvmf_tgt_poll_group_002", 00:13:12.481 "admin_qpairs": 0, 00:13:12.481 "io_qpairs": 0, 00:13:12.481 "current_admin_qpairs": 0, 00:13:12.481 "current_io_qpairs": 0, 00:13:12.481 "pending_bdev_io": 0, 00:13:12.481 "completed_nvme_io": 0, 00:13:12.481 "transports": [ 00:13:12.481 { 00:13:12.481 "trtype": "TCP" 
00:13:12.481 } 00:13:12.481 ] 00:13:12.481 }, 00:13:12.481 { 00:13:12.481 "name": "nvmf_tgt_poll_group_003", 00:13:12.481 "admin_qpairs": 0, 00:13:12.481 "io_qpairs": 0, 00:13:12.481 "current_admin_qpairs": 0, 00:13:12.481 "current_io_qpairs": 0, 00:13:12.481 "pending_bdev_io": 0, 00:13:12.481 "completed_nvme_io": 0, 00:13:12.481 "transports": [ 00:13:12.481 { 00:13:12.481 "trtype": "TCP" 00:13:12.481 } 00:13:12.481 ] 00:13:12.481 } 00:13:12.481 ] 00:13:12.481 }' 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:12.481 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.742 Malloc1 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.742 [2024-10-01 16:38:04.248405] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.2 -s 4420 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.2 -s 4420 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.2 -s 4420 00:13:12.742 [2024-10-01 16:38:04.281331] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a' 00:13:12.742 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:12.742 could not add new controller: failed to write to nvme-fabrics device 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:12.742 16:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.742 16:38:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.125 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:14.125 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:14.125 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.125 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:14.125 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:16.666 [2024-10-01 16:38:07.958400] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a' 00:13:16.666 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:16.666 could not add new controller: failed to write to nvme-fabrics device 00:13:16.666 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:16.667 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:16.667 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:16.667 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:16.667 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:16.667 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.667 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.667 
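What the two failing connects above verify: with allow_any_host disabled, a host NQN missing from the subsystem's whitelist is rejected at the fabrics level ("does not allow host" followed by an I/O error on /dev/nvme-fabrics). The test exercises both ways of authorizing the host; a sketch in terms of SPDK's scripts/rpc.py, which stands in here for the test's rpc_cmd wrapper:

# option 1: whitelist the specific host NQN on the subsystem
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
# option 2: open the subsystem to any host (-e enables, -d disables)
rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
# after either, the initiator-side connect succeeds
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a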
16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.667 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.049 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:18.049 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:18.049 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:18.049 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:18.049 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:19.957 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:19.957 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:19.957 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.957 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:19.957 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.957 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:19.957 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:20.217 
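Each of the five loop iterations that follows builds the subsystem from scratch, attaches the initiator, and tears everything down again. A condensed sketch of one iteration (rpc.py again stands in for rpc_cmd; Malloc1 is the 64 MiB, 512-byte-block bdev created earlier with bdev_malloc_create):

for i in $(seq 1 5); do
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # namespace ID 5
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # ... wait for the namespace to surface as a block device, then tear down
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done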
16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.217 [2024-10-01 16:38:11.709161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.217 16:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.625 16:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.625 16:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:21.625 16:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.625 16:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:21.625 16:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:24.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.183 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.184 [2024-10-01 16:38:15.432997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.184 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:25.563 16:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:25.563 16:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:25.563 16:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:25.563 16:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:25.563 16:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:27.473 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:27.473 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:27.473 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:27.473 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:27.473 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:27.473 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:27.473 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:27.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
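The recurring (( i++ <= 15 )) / sleep 2 / lsblk / grep -c pattern traced above is the waitforserial helper: it polls until the freshly connected controller exposes a block device whose serial matches. A simplified reconstruction of the traced logic (the real helper in autotest_common.sh also compares against an expected device count):

waitforserial() {
    local serial=$1 i=0
    while ((i++ <= 15)); do
        sleep 2
        # count block devices whose SERIAL column matches the subsystem serial
        (($(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1)) && return 0
    done
    return 1   # the namespace never appeared
}
waitforserial SPDKISFASTANDAWESOME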
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.734 [2024-10-01 16:38:19.246325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.734 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:29.640 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:29.640 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:29.640 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:29.640 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:29.640 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:31.550 
16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:31.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.550 [2024-10-01 16:38:22.976844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.550 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:32.929 16:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:32.929 16:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:32.929 16:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:32.929 16:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:32.929 16:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:35.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.468 [2024-10-01 16:38:26.716112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.468 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:36.847 16:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:36.847 16:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:36.847 16:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.847 16:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:36.847 16:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:38.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.758 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:39.019 
16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.019 [2024-10-01 16:38:30.467169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.019 [2024-10-01 16:38:30.535337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.019 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.020 
16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.020 [2024-10-01 16:38:30.603558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.020 [2024-10-01 16:38:30.671737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.020 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.280 [2024-10-01 16:38:30.735942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:39.280 "tick_rate": 2600000000, 00:13:39.280 "poll_groups": [ 00:13:39.280 { 00:13:39.280 "name": "nvmf_tgt_poll_group_000", 00:13:39.280 "admin_qpairs": 0, 00:13:39.280 "io_qpairs": 224, 00:13:39.280 "current_admin_qpairs": 0, 00:13:39.280 "current_io_qpairs": 0, 00:13:39.280 "pending_bdev_io": 0, 00:13:39.280 "completed_nvme_io": 506, 00:13:39.280 "transports": [ 00:13:39.280 { 00:13:39.280 "trtype": "TCP" 00:13:39.280 } 00:13:39.280 ] 00:13:39.280 }, 00:13:39.280 { 00:13:39.280 "name": "nvmf_tgt_poll_group_001", 00:13:39.280 "admin_qpairs": 1, 00:13:39.280 "io_qpairs": 223, 00:13:39.280 "current_admin_qpairs": 0, 00:13:39.280 "current_io_qpairs": 0, 00:13:39.280 "pending_bdev_io": 0, 00:13:39.280 "completed_nvme_io": 225, 00:13:39.280 "transports": [ 00:13:39.280 { 00:13:39.280 "trtype": "TCP" 00:13:39.280 } 00:13:39.280 ] 00:13:39.280 }, 00:13:39.280 { 00:13:39.280 "name": "nvmf_tgt_poll_group_002", 00:13:39.280 "admin_qpairs": 6, 00:13:39.280 "io_qpairs": 218, 00:13:39.280 "current_admin_qpairs": 0, 00:13:39.280 "current_io_qpairs": 0, 00:13:39.280 "pending_bdev_io": 0, 00:13:39.280 "completed_nvme_io": 220, 00:13:39.280 "transports": [ 00:13:39.280 { 00:13:39.280 "trtype": "TCP" 00:13:39.280 } 00:13:39.280 ] 00:13:39.280 }, 00:13:39.280 { 00:13:39.280 "name": "nvmf_tgt_poll_group_003", 00:13:39.280 "admin_qpairs": 0, 00:13:39.280 "io_qpairs": 224, 00:13:39.280 "current_admin_qpairs": 0, 00:13:39.280 "current_io_qpairs": 0, 00:13:39.280 "pending_bdev_io": 0, 00:13:39.280 "completed_nvme_io": 288, 00:13:39.280 "transports": [ 00:13:39.280 { 00:13:39.280 "trtype": "TCP" 00:13:39.280 } 00:13:39.280 ] 00:13:39.280 } 00:13:39.280 ] 00:13:39.280 }' 00:13:39.280 16:38:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:39.280 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:39.280 rmmod nvme_tcp 00:13:39.280 rmmod nvme_fabrics 00:13:39.280 rmmod nvme_keyring 00:13:39.540 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:39.540 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:39.540 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:39.540 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 2617178 ']' 00:13:39.540 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 2617178 00:13:39.540 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2617178 ']' 00:13:39.540 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2617178 00:13:39.540 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:39.540 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:39.540 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2617178 00:13:39.540 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:39.540 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:39.540 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2617178' 00:13:39.540 killing process with pid 2617178 00:13:39.540 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2617178 00:13:39.540 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2617178 00:13:39.540 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:39.540 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:39.540 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:39.540 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:39.540 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:13:39.540 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:39.540 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:13:39.540 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:39.540 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:39.540 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.540 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.540 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.082 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:42.082 00:13:42.082 real 0m38.095s 00:13:42.082 user 1m53.912s 00:13:42.082 sys 0m7.853s 00:13:42.082 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.082 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.082 ************************************ 00:13:42.083 END TEST nvmf_rpc 00:13:42.083 ************************************ 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:42.083 ************************************ 00:13:42.083 START TEST nvmf_invalid 00:13:42.083 ************************************ 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:42.083 * Looking for test storage... 
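The admin/io qpair totals asserted just before that teardown come from a jsum helper that sums a jq projection over the nvmf_get_stats output. A small sketch of the aggregation, matching the jq-plus-awk pipeline in the trace (capturing the stats into $stats first, as rpc.sh appears to do, is an assumption):

stats=$($rpc nvmf_get_stats)

# Sum one numeric field across all poll groups, e.g. '.poll_groups[].io_qpairs'.
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in this run
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 889 in this run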
00:13:42.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:42.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.083 --rc genhtml_branch_coverage=1 00:13:42.083 --rc genhtml_function_coverage=1 00:13:42.083 --rc genhtml_legend=1 00:13:42.083 --rc geninfo_all_blocks=1 00:13:42.083 --rc geninfo_unexecuted_blocks=1 00:13:42.083 00:13:42.083 ' 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:42.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.083 --rc genhtml_branch_coverage=1 00:13:42.083 --rc genhtml_function_coverage=1 00:13:42.083 --rc genhtml_legend=1 00:13:42.083 --rc geninfo_all_blocks=1 00:13:42.083 --rc geninfo_unexecuted_blocks=1 00:13:42.083 00:13:42.083 ' 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:42.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.083 --rc genhtml_branch_coverage=1 00:13:42.083 --rc genhtml_function_coverage=1 00:13:42.083 --rc genhtml_legend=1 00:13:42.083 --rc geninfo_all_blocks=1 00:13:42.083 --rc geninfo_unexecuted_blocks=1 00:13:42.083 00:13:42.083 ' 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:42.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.083 --rc genhtml_branch_coverage=1 00:13:42.083 --rc genhtml_function_coverage=1 00:13:42.083 --rc genhtml_legend=1 00:13:42.083 --rc geninfo_all_blocks=1 00:13:42.083 --rc geninfo_unexecuted_blocks=1 00:13:42.083 00:13:42.083 ' 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:42.083 16:38:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.083 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:42.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:42.084 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:48.671 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:48.671 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]]
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]]
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:13:48.671 Found net devices under 0000:4b:00.0: cvl_0_0
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]]
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:48.671 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:13:48.671 Found net devices under 0000:4b:00.1: cvl_0_1
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:48.672 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:48.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:48.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms
00:13:48.933
00:13:48.933 --- 10.0.0.2 ping statistics ---
00:13:48.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:48.933 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:48.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:48.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms
00:13:48.933
00:13:48.933 --- 10.0.0.1 ping statistics ---
00:13:48.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:48.933 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp
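The TCP test bed is now fully plumbed and verified: one port of the E810 pair found above (cvl_0_0) has been moved into a private network namespace to act as the target side, its peer (cvl_0_1) stays in the root namespace as the initiator side, and a single ping in each direction has confirmed the link. Condensed into plain commands, the nvmf_tcp_init sequence traced above amounts to the sketch below (interface names, addresses, and the SPDK_NVMF rule comment are taken verbatim from the trace; error handling is omitted):

    # Create the target namespace and move the target-side port into it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends of the link; the initiator port stays in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring up both ports plus the namespace loopback
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port; the comment lets the cleanup phase find the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Verify both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The NVMF_APP array assembled above prepends ip netns exec cvl_0_0_ns_spdk to the target command line, which is why the nvmf_tgt launch that follows runs inside the namespace.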
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=2625831
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 2625831
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2625831 ']'
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:48.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:48.933 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:48.933 [2024-10-01 16:38:40.475616] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:13:48.934 [2024-10-01 16:38:40.475676] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:48.934 [2024-10-01 16:38:40.563196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:49.194 [2024-10-01 16:38:40.657134] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:49.194 [2024-10-01 16:38:40.657187] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:49.194 [2024-10-01 16:38:40.657195] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:49.194 [2024-10-01 16:38:40.657202] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:49.194 [2024-10-01 16:38:40.657208] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:49.195 [2024-10-01 16:38:40.657333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:13:49.195 [2024-10-01 16:38:40.657473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:13:49.195 [2024-10-01 16:38:40.657596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:13:49.195 [2024-10-01 16:38:40.657600] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:13:49.765 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:49.765 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0
00:13:49.765 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:13:49.765 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:49.765 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:49.765 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:49.765 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:13:49.765 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20767
00:13:50.026 [2024-10-01 16:38:41.610886] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:13:50.026 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:13:50.026 {
00:13:50.026 "nqn": "nqn.2016-06.io.spdk:cnode20767",
00:13:50.026 "tgt_name": "foobar",
00:13:50.026 "method": "nvmf_create_subsystem",
00:13:50.026 "req_id": 1
00:13:50.026 }
00:13:50.026 Got JSON-RPC error response
00:13:50.026 response:
00:13:50.026 {
00:13:50.026 "code": -32603,
00:13:50.026 "message": "Unable to find target foobar"
00:13:50.026 }'
00:13:50.026 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:13:50.026 {
00:13:50.026 "nqn": "nqn.2016-06.io.spdk:cnode20767",
00:13:50.026 "tgt_name": "foobar",
00:13:50.026 "method": "nvmf_create_subsystem",
00:13:50.026 "req_id": 1
00:13:50.026 }
00:13:50.026 Got JSON-RPC error response
00:13:50.026 response:
00:13:50.026 {
00:13:50.026 "code": -32603,
00:13:50.026 "message": "Unable to find target foobar"
00:13:50.026 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:13:50.026 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:13:50.026 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24158
00:13:50.287 [2024-10-01 16:38:41.827611] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24158: invalid serial number 'SPDKISFASTANDAWESOME'
00:13:50.287 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:13:50.287 {
00:13:50.287 "nqn": "nqn.2016-06.io.spdk:cnode24158",
00:13:50.287 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:50.287 "method": "nvmf_create_subsystem",
00:13:50.287 "req_id": 1
00:13:50.287 }
00:13:50.287 Got JSON-RPC error response
00:13:50.287 response:
00:13:50.287 {
00:13:50.287 "code": -32602,
00:13:50.287 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:50.287 }'
00:13:50.287 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:13:50.287 {
00:13:50.287 "nqn": "nqn.2016-06.io.spdk:cnode24158",
00:13:50.287 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:50.287 "method": "nvmf_create_subsystem",
00:13:50.287 "req_id": 1
00:13:50.287 }
00:13:50.287 Got JSON-RPC error response
00:13:50.287 response:
00:13:50.287 {
00:13:50.287 "code": -32602,
00:13:50.287 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:50.287 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:13:50.287 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:13:50.287 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10871
00:13:50.549 [2024-10-01 16:38:42.048284] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10871: invalid model number 'SPDK_Controller'
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:13:50.549 {
00:13:50.549 "nqn": "nqn.2016-06.io.spdk:cnode10871",
00:13:50.549 "model_number": "SPDK_Controller\u001f",
00:13:50.549 "method": "nvmf_create_subsystem",
00:13:50.549 "req_id": 1
00:13:50.549 }
00:13:50.549 Got JSON-RPC error response
00:13:50.549 response:
00:13:50.549 {
00:13:50.549 "code": -32602,
00:13:50.549 "message": "Invalid MN SPDK_Controller\u001f"
00:13:50.549 }'
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:13:50.549 {
00:13:50.549 "nqn": "nqn.2016-06.io.spdk:cnode10871",
00:13:50.549 "model_number": "SPDK_Controller\u001f",
00:13:50.549 "method": "nvmf_create_subsystem",
00:13:50.549 "req_id": 1
00:13:50.549 }
00:13:50.549 Got JSON-RPC error response
00:13:50.549 response:
00:13:50.549 {
00:13:50.549 "code": -32602,
00:13:50.549 "message": "Invalid MN SPDK_Controller\u001f"
00:13:50.549 } == *\I\n\v\a\l\i\d\ \M\N* ]]
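The next checks stress the serial-number and model-number validators with strings of an exact length built from arbitrary printable characters, which invalid.sh produces with its gen_random_s helper; the trace that follows shows it appending one character per loop iteration. A minimal sketch of such a generator is given here; the random index selection happens before the printf/echo steps the trace records, so the use of $RANDOM below is an assumption rather than something visible in the log:

    # gen_random_s LENGTH: emit a random string of LENGTH characters drawn
    # from ASCII codes 32..127, the same pool as the traced chars array
    gen_random_s() {
        local length=$1 ll
        local chars=($(seq 32 127))
        local string
        for ((ll = 0; ll < length; ll++)); do
            # pick a code point, render it via printf/echo -e, and append it
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }

The [[ ... == \- ]] test the trace runs on each finished string appears to guard against output whose first character is a dash, which the RPC client would otherwise parse as an option.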
16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20'
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' '
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40'
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22'
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"'
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a'
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63'
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c
00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:50.549 16:38:42
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:50.549 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:50.550 
16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 
00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ == \- ]] 00:13:50.550 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ' @"Jc`]tL2`={QmaE\OH<' 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ' @"Jc`]tL2`={QmaE\OH<' nqn.2016-06.io.spdk:cnode25350 00:13:50.811 [2024-10-01 16:38:42.425502] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25350: invalid serial number ' @"Jc`]tL2`={QmaE\OH<' 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:50.811 { 00:13:50.811 "nqn": "nqn.2016-06.io.spdk:cnode25350", 00:13:50.811 "serial_number": " @\"Jc`]tL2`={QmaE\\OH<", 00:13:50.811 "method": "nvmf_create_subsystem", 00:13:50.811 "req_id": 1 00:13:50.811 } 00:13:50.811 Got JSON-RPC error response 00:13:50.811 response: 00:13:50.811 { 00:13:50.811 "code": -32602, 00:13:50.811 "message": "Invalid SN @\"Jc`]tL2`={QmaE\\OH<" 00:13:50.811 }' 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:50.811 { 00:13:50.811 "nqn": "nqn.2016-06.io.spdk:cnode25350", 00:13:50.811 "serial_number": " @\"Jc`]tL2`={QmaE\\OH<", 00:13:50.811 "method": "nvmf_create_subsystem", 00:13:50.811 "req_id": 1 00:13:50.811 } 00:13:50.811 Got JSON-RPC error response 00:13:50.811 response: 00:13:50.811 { 00:13:50.811 "code": -32602, 00:13:50.811 "message": "Invalid SN @\"Jc`]tL2`={QmaE\\OH<" 00:13:50.811 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' 
'73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:50.811 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 
00:13:51.072 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:51.072 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.072 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.072 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:51.072 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 
00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x57' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 70 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.073 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.074 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:51.334 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:51.334 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:51.334 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:51.334 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:51.334 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ " == \- ]] 00:13:51.334 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '"&PzRguO,|^,8>}?z=;e2QW*JCbj'\''Ff."::!&",9O' 00:13:51.334 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '"&PzRguO,|^,8>}?z=;e2QW*JCbj'\''Ff."::!&",9O' nqn.2016-06.io.spdk:cnode5685 00:13:51.334 [2024-10-01 16:38:42.955184] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5685: invalid model number '"&PzRguO,|^,8>}?z=;e2QW*JCbj'Ff."::!&",9O' 00:13:51.334 16:38:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:51.334 { 00:13:51.334 "nqn": "nqn.2016-06.io.spdk:cnode5685", 00:13:51.334 "model_number": "\"&PzRguO,|^,8>}?z=;e2QW*JCbj'\''Ff.\"::!&\",9O", 00:13:51.334 "method": "nvmf_create_subsystem", 00:13:51.334 "req_id": 1 00:13:51.334 } 00:13:51.334 Got JSON-RPC error response 00:13:51.334 response: 00:13:51.334 { 00:13:51.334 "code": -32602, 00:13:51.334 "message": "Invalid MN \"&PzRguO,|^,8>}?z=;e2QW*JCbj'\''Ff.\"::!&\",9O" 00:13:51.334 }' 00:13:51.334 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:51.334 { 00:13:51.334 "nqn": "nqn.2016-06.io.spdk:cnode5685", 00:13:51.334 "model_number": "\"&PzRguO,|^,8>}?z=;e2QW*JCbj'Ff.\"::!&\",9O", 00:13:51.334 "method": "nvmf_create_subsystem", 00:13:51.334 "req_id": 1 00:13:51.334 } 00:13:51.334 Got JSON-RPC error response 00:13:51.334 response: 00:13:51.334 { 00:13:51.334 "code": -32602, 00:13:51.334 "message": "Invalid MN \"&PzRguO,|^,8>}?z=;e2QW*JCbj'Ff.\"::!&\",9O" 00:13:51.334 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:51.334 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:51.594 [2024-10-01 16:38:43.167952] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.594 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:51.854 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:51.854 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:51.854 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:51.854 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:51.854 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:52.115 [2024-10-01 16:38:43.614341] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:52.115 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:52.115 { 00:13:52.115 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:52.115 "listen_address": { 00:13:52.115 "trtype": "tcp", 00:13:52.115 "traddr": "", 00:13:52.115 "trsvcid": "4421" 00:13:52.115 }, 00:13:52.115 "method": "nvmf_subsystem_remove_listener", 00:13:52.115 "req_id": 1 00:13:52.115 } 00:13:52.115 Got JSON-RPC error response 00:13:52.115 response: 00:13:52.115 { 00:13:52.115 "code": -32602, 00:13:52.115 "message": "Invalid parameters" 00:13:52.115 }' 00:13:52.115 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:52.115 { 00:13:52.115 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:52.115 "listen_address": { 00:13:52.115 "trtype": "tcp", 00:13:52.115 "traddr": "", 00:13:52.115 "trsvcid": "4421" 00:13:52.115 }, 00:13:52.115 "method": "nvmf_subsystem_remove_listener", 00:13:52.115 "req_id": 1 00:13:52.115 } 00:13:52.115 Got JSON-RPC error response 00:13:52.115 response: 00:13:52.115 { 00:13:52.115 "code": -32602, 00:13:52.115 "message": "Invalid parameters" 00:13:52.115 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ 
\l\i\s\t\e\n\e\r\.* ]]
16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14203 -i 0
00:13:52.375 [2024-10-01 16:38:43.831036] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14203: invalid cntlid range [0-65519]
00:13:52.375 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:13:52.375 {
00:13:52.375 "nqn": "nqn.2016-06.io.spdk:cnode14203",
00:13:52.375 "min_cntlid": 0,
00:13:52.375 "method": "nvmf_create_subsystem",
00:13:52.375 "req_id": 1
00:13:52.375 }
00:13:52.375 Got JSON-RPC error response
00:13:52.375 response:
00:13:52.375 {
00:13:52.375 "code": -32602,
00:13:52.375 "message": "Invalid cntlid range [0-65519]"
00:13:52.375 }'
00:13:52.375 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request:
00:13:52.375 {
00:13:52.375 "nqn": "nqn.2016-06.io.spdk:cnode14203",
00:13:52.375 "min_cntlid": 0,
00:13:52.375 "method": "nvmf_create_subsystem",
00:13:52.375 "req_id": 1
00:13:52.375 }
00:13:52.375 Got JSON-RPC error response
00:13:52.375 response:
00:13:52.375 {
00:13:52.375 "code": -32602,
00:13:52.375 "message": "Invalid cntlid range [0-65519]"
00:13:52.375 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27717 -i 65520
00:13:52.375 [2024-10-01 16:38:44.047660] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27717: invalid cntlid range [65520-65519]
00:13:52.637 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request:
00:13:52.637 {
00:13:52.637 "nqn": "nqn.2016-06.io.spdk:cnode27717",
00:13:52.637 "min_cntlid": 65520,
00:13:52.637 "method": "nvmf_create_subsystem",
00:13:52.637 "req_id": 1
00:13:52.637 }
00:13:52.637 Got JSON-RPC error response
00:13:52.637 response:
00:13:52.637 {
00:13:52.637 "code": -32602,
00:13:52.637 "message": "Invalid cntlid range [65520-65519]"
00:13:52.637 }'
00:13:52.637 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request:
00:13:52.637 {
00:13:52.637 "nqn": "nqn.2016-06.io.spdk:cnode27717",
00:13:52.637 "min_cntlid": 65520,
00:13:52.637 "method": "nvmf_create_subsystem",
00:13:52.637 "req_id": 1
00:13:52.637 }
00:13:52.637 Got JSON-RPC error response
00:13:52.637 response:
00:13:52.637 {
00:13:52.637 "code": -32602,
00:13:52.637 "message": "Invalid cntlid range [65520-65519]"
00:13:52.637 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23830 -I 0
00:13:52.637 [2024-10-01 16:38:44.264335] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23830: invalid cntlid range [1-0]
00:13:52.637 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request:
00:13:52.637 {
00:13:52.637 "nqn": "nqn.2016-06.io.spdk:cnode23830",
00:13:52.637 "max_cntlid": 0,
00:13:52.637 "method": "nvmf_create_subsystem",
00:13:52.637 "req_id": 1
00:13:52.637 }
00:13:52.637 Got JSON-RPC error response
00:13:52.637 response:
00:13:52.637 {
00:13:52.637 "code": -32602,
00:13:52.637 "message": "Invalid cntlid range [1-0]"
00:13:52.637 }'
00:13:52.637 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request:
00:13:52.637 {
00:13:52.637 "nqn": "nqn.2016-06.io.spdk:cnode23830",
00:13:52.637 "max_cntlid": 0,
00:13:52.637 "method": "nvmf_create_subsystem",
00:13:52.637 "req_id": 1
00:13:52.637 }
00:13:52.637 Got JSON-RPC error response
00:13:52.637 response:
00:13:52.637 {
00:13:52.637 "code": -32602,
00:13:52.637 "message": "Invalid cntlid range [1-0]"
00:13:52.637 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3505 -I 65520
00:13:52.897 [2024-10-01 16:38:44.481041] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3505: invalid cntlid range [1-65520]
00:13:52.897 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request:
00:13:52.897 {
00:13:52.897 "nqn": "nqn.2016-06.io.spdk:cnode3505",
00:13:52.897 "max_cntlid": 65520,
00:13:52.897 "method": "nvmf_create_subsystem",
00:13:52.897 "req_id": 1
00:13:52.897 }
00:13:52.897 Got JSON-RPC error response
00:13:52.897 response:
00:13:52.897 {
00:13:52.897 "code": -32602,
00:13:52.897 "message": "Invalid cntlid range [1-65520]"
00:13:52.897 }'
00:13:52.897 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request:
00:13:52.897 {
00:13:52.897 "nqn": "nqn.2016-06.io.spdk:cnode3505",
00:13:52.897 "max_cntlid": 65520,
00:13:52.897 "method": "nvmf_create_subsystem",
00:13:52.897 "req_id": 1
00:13:52.897 }
00:13:52.897 Got JSON-RPC error response
00:13:52.897 response:
00:13:52.897 {
00:13:52.897 "code": -32602,
00:13:52.897 "message": "Invalid cntlid range [1-65520]"
00:13:52.897 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4675 -i 6 -I 5
00:13:53.157 [2024-10-01 16:38:44.685688] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4675: invalid cntlid range [6-5]
00:13:53.157 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request:
00:13:53.157 {
00:13:53.157 "nqn": "nqn.2016-06.io.spdk:cnode4675",
00:13:53.157 "min_cntlid": 6,
00:13:53.157 "max_cntlid": 5,
00:13:53.157 "method": "nvmf_create_subsystem",
00:13:53.157 "req_id": 1
00:13:53.157 }
00:13:53.157 Got JSON-RPC error response
00:13:53.157 response:
00:13:53.157 {
00:13:53.157 "code": -32602,
00:13:53.157 "message": "Invalid cntlid range [6-5]"
00:13:53.157 }'
00:13:53.157 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request:
00:13:53.157 {
00:13:53.157 "nqn": "nqn.2016-06.io.spdk:cnode4675",
00:13:53.157 "min_cntlid": 6,
00:13:53.157 "max_cntlid": 5,
00:13:53.157 "method": "nvmf_create_subsystem",
00:13:53.157 "req_id": 1
00:13:53.157 }
00:13:53.157 Got JSON-RPC error response
00:13:53.157 response:
00:13:53.157 {
00:13:53.157 "code": -32602,
00:13:53.157 "message": "Invalid cntlid range [6-5]"
00:13:53.157 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:53.157 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:53.157 { 00:13:53.157 "name": "foobar", 00:13:53.157 "method": "nvmf_delete_target", 00:13:53.157 "req_id": 1 00:13:53.157 } 00:13:53.157 Got JSON-RPC error response 00:13:53.157 response: 00:13:53.157 { 00:13:53.157 "code": -32602, 00:13:53.157 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:53.157 }' 00:13:53.157 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:53.157 { 00:13:53.157 "name": "foobar", 00:13:53.157 "method": "nvmf_delete_target", 00:13:53.157 "req_id": 1 00:13:53.157 } 00:13:53.157 Got JSON-RPC error response 00:13:53.157 response: 00:13:53.157 { 00:13:53.157 "code": -32602, 00:13:53.157 "message": "The specified target doesn't exist, cannot delete it." 00:13:53.157 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:53.157 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:53.157 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:53.157 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:53.157 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:53.157 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:53.157 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:53.157 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:53.157 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:53.157 rmmod nvme_tcp 00:13:53.419 rmmod nvme_fabrics 00:13:53.419 rmmod nvme_keyring 00:13:53.419 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:53.419 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:53.419 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:53.419 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 2625831 ']' 00:13:53.419 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 2625831 00:13:53.419 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2625831 ']' 00:13:53.419 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2625831 00:13:53.419 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:53.419 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:53.419 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2625831 00:13:53.419 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:53.419 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:53.419 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2625831' 00:13:53.419 killing process with pid 2625831 00:13:53.419 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2625831 00:13:53.419 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2625831 00:13:53.419 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:53.419 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:53.419 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:53.419 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:53.419 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:13:53.419 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore 00:13:53.419 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:53.419 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:53.419 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:53.419 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.419 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:53.419 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:55.962 00:13:55.962 real 0m13.799s 00:13:55.962 user 0m22.496s 00:13:55.962 sys 0m6.064s 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:55.962 ************************************ 00:13:55.962 END TEST nvmf_invalid 00:13:55.962 ************************************ 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:55.962 ************************************ 00:13:55.962 START TEST nvmf_connect_stress 00:13:55.962 ************************************ 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:55.962 * Looking for test storage... 
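Recapping the nvmf_invalid run that ends above before the connect_stress output continues: every probe drove scripts/rpc.py nvmf_create_subsystem with an out-of-bounds controller ID window, and the target answered each with JSON-RPC error -32602. Taken together, the rejections [0-65519], [65520-65519], [1-0], [1-65520] and [6-5] pin the accepted window to 1 <= min_cntlid (-i) <= max_cntlid (-I) <= 65519. A minimal sketch of the same boundary probes, reusing the rpc.py path from this workspace; cnode9999 is a made-up NQN for illustration:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9999 -i 0            # rejected: "Invalid cntlid range [0-65519]"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9999 -I 65520        # rejected: "Invalid cntlid range [1-65520]"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9999 -i 6 -I 5       # rejected: min_cntlid > max_cntlid
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9999 -i 1 -I 65519   # should be accepted: the full legal window (and creates the subsystem)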
00:13:55.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:55.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.962 --rc genhtml_branch_coverage=1 00:13:55.962 --rc genhtml_function_coverage=1 00:13:55.962 --rc genhtml_legend=1 00:13:55.962 --rc geninfo_all_blocks=1 00:13:55.962 --rc geninfo_unexecuted_blocks=1 00:13:55.962 00:13:55.962 ' 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:55.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.962 --rc genhtml_branch_coverage=1 00:13:55.962 --rc genhtml_function_coverage=1 00:13:55.962 --rc genhtml_legend=1 00:13:55.962 --rc geninfo_all_blocks=1 00:13:55.962 --rc geninfo_unexecuted_blocks=1 00:13:55.962 00:13:55.962 ' 00:13:55.962 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:55.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.963 --rc genhtml_branch_coverage=1 00:13:55.963 --rc genhtml_function_coverage=1 00:13:55.963 --rc genhtml_legend=1 00:13:55.963 --rc geninfo_all_blocks=1 00:13:55.963 --rc geninfo_unexecuted_blocks=1 00:13:55.963 00:13:55.963 ' 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:55.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.963 --rc genhtml_branch_coverage=1 00:13:55.963 --rc genhtml_function_coverage=1 00:13:55.963 --rc genhtml_legend=1 00:13:55.963 --rc geninfo_all_blocks=1 00:13:55.963 --rc geninfo_unexecuted_blocks=1 00:13:55.963 00:13:55.963 ' 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:55.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:55.963 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:04.211 16:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:04.211 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:04.211 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:04.211 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:04.211 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:04.211 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:04.211 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:04.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:04.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:14:04.212 00:14:04.212 --- 10.0.0.2 ping statistics --- 00:14:04.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.212 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:14:04.212 00:14:04.212 --- 10.0.0.1 ping statistics --- 00:14:04.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.212 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=2630804 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 2630804 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2630804 ']' 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:04.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.212 [2024-10-01 16:38:55.218150] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:14:04.212 [2024-10-01 16:38:55.218217] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.212 [2024-10-01 16:38:55.276694] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:04.212 [2024-10-01 16:38:55.330914] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.212 [2024-10-01 16:38:55.330950] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.212 [2024-10-01 16:38:55.330956] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.212 [2024-10-01 16:38:55.330961] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.212 [2024-10-01 16:38:55.330966] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:04.212 [2024-10-01 16:38:55.331091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:04.212 [2024-10-01 16:38:55.331293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:04.212 [2024-10-01 16:38:55.331296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.212 [2024-10-01 16:38:55.448410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
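Condensed from the rpc_cmd trace around this point: connect_stress brings the target up by creating the TCP transport, a subsystem (allow-any-host, serial SPDK00000000000001), a listener on 10.0.0.2:4420 and a null backing bdev, then launches the stressor while the harness polls it with kill -0. A sketch of that sequence using the paths and arguments of this run; the backgrounding via & / $! and the sleep in the poll loop are simplifications, since per the trace the script actually fires batches of RPCs from a generated rpc.txt between liveness checks:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
    $spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2>/dev/null; do    # signal 0: existence check only, nothing is delivered
        sleep 1                                  # stand-in for the random-RPC batches seen below
    done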
00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.212 [2024-10-01 16:38:55.472627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.212 NULL1 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2630854 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:04.212 16:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2630854 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.212 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.472 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.472 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2630854 00:14:04.472 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.472 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.472 16:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.731 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.731 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2630854 00:14:04.731 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.731 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.731 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.991 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.991 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2630854 00:14:04.991 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.991 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.991 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.251 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.251 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2630854 00:14:05.251 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.251 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.251 16:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.524 16:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.524 16:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2630854 00:14:05.524 16:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.524 16:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.524 16:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.093 16:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.093 16:38:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2630854 00:14:06.093 16:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.093 16:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.093 16:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
[... the same five-entry poll ([[ 0 == 0 ]] / kill -0 2630854 / rpc_cmd / xtrace_disable / set +x) repeats, identical except for timestamps, from 00:14:06.353 (16:38:57) through 00:14:13.300 (16:39:04); the final iterations and the poll's resolution follow ...]
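The span above is connect_stress.sh's liveness poll: line 34 probes the stress workload with kill -0, and line 35 drives RPCs at the target for as long as the workload stays up (the rm -f of rpc.txt in the teardown below suggests that file feeds the bare rpc_cmd). A minimal sketch of the pattern, with $stress_pid, $testdir and the rpc.txt redirection assumed for illustration rather than copied from the script:

    # kill -0 delivers no signal; it only reports whether the PID is still alive
    while kill -0 "$stress_pid" 2>/dev/null; do
        rpc_cmd < "$testdir/rpc.txt"    # keep the target's RPC server busy during the run
    done
    wait "$stress_pid"                  # reap the workload once the poll fails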
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2630854 00:14:13.559 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.559 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.559 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.127 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.127 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2630854 00:14:14.127 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.127 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.127 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.127 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:14.387 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2630854 00:14:14.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2630854) - No such process 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2630854 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:14.388 rmmod nvme_tcp 00:14:14.388 rmmod nvme_fabrics 00:14:14.388 rmmod nvme_keyring 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 2630804 ']' 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 2630804 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2630804 ']' 00:14:14.388 16:39:05 
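The poll ends when kill -0 reports "No such process"; the script then waits on the PID, removes rpc.txt, and calls nvmftestfini, which unloads nvme-tcp/nvme-fabrics/nvme-keyring and hands the long-running target PID (2630804) to killprocess. The uname/ps/echo/kill/wait steps killprocess traces below reduce to roughly this guarded kill (a sketch reconstructed from the trace, not the verbatim helper):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0                        # already gone, nothing to do
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 for this SPDK target
        # (uname gating for non-Linux ps, and the special case for process_name = sudo,
        #  exist in the real helper but are not exercised here)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # block until the reactor exits
    }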
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2630804 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:14.388 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2630804 00:14:14.388 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:14.388 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:14.388 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2630804' 00:14:14.388 killing process with pid 2630804 00:14:14.388 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2630804 00:14:14.388 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2630804 00:14:14.648 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:14.648 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:14.648 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:14.648 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:14.648 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:14:14.648 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:14.648 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:14:14.648 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:14.648 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:14.648 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.648 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:14.648 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.554 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:16.554 00:14:16.554 real 0m21.000s 00:14:16.554 user 0m42.662s 00:14:16.554 sys 0m7.851s 00:14:16.554 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:16.554 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.554 ************************************ 00:14:16.554 END TEST nvmf_connect_stress 00:14:16.554 ************************************ 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:16.814 
16:39:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:16.814 ************************************ 00:14:16.814 START TEST nvmf_fused_ordering 00:14:16.814 ************************************ 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:16.814 * Looking for test storage... 00:14:16.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:16.814 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:16.815 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:16.815 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:16.815 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:16.815 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:16.815 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:16.815 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:16.815 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:16.815 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:16.815 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:16.815 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:16.815 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:16.815 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:16.815 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:16.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.815 --rc genhtml_branch_coverage=1 00:14:16.815 --rc genhtml_function_coverage=1 00:14:16.815 --rc genhtml_legend=1 00:14:16.815 --rc geninfo_all_blocks=1 00:14:16.815 --rc geninfo_unexecuted_blocks=1 00:14:16.815 00:14:16.815 ' 00:14:16.815 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:16.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.815 --rc genhtml_branch_coverage=1 00:14:16.815 --rc genhtml_function_coverage=1 00:14:16.815 --rc genhtml_legend=1 00:14:16.815 --rc geninfo_all_blocks=1 00:14:16.815 --rc geninfo_unexecuted_blocks=1 00:14:16.815 00:14:16.815 ' 00:14:16.815 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:16.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.815 --rc genhtml_branch_coverage=1 00:14:16.815 --rc genhtml_function_coverage=1 00:14:16.815 --rc genhtml_legend=1 00:14:16.815 --rc geninfo_all_blocks=1 00:14:16.815 --rc geninfo_unexecuted_blocks=1 00:14:16.815 00:14:16.815 ' 00:14:16.815 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:16.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.815 --rc genhtml_branch_coverage=1 00:14:16.815 --rc genhtml_function_coverage=1 00:14:16.815 --rc genhtml_legend=1 00:14:16.815 --rc geninfo_all_blocks=1 00:14:16.815 --rc geninfo_unexecuted_blocks=1 00:14:16.815 00:14:16.815 ' 00:14:16.815 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.815 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go triple, already stacked by repeated sourcing of paths/export.sh, repeated down to ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same duplicated tail ...] 00:14:17.075 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same duplicated tail ...] 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same duplicated tail ...] 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:14:17.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:17.076 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:25.223 16:39:15 
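The "[: : integer expression expected" message above is noise rather than a failure: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', and test's -eq requires integer operands, so the empty left-hand expansion makes the test exit with status 2 and the condition simply evaluates false. Purely illustrative (not a change present in the suite), the usual ways to keep such a check quiet:

    flag=""                    # e.g. an unset 0/1 knob
    [ "$flag" -eq 1 ]          # "[: : integer expression expected", exit status 2
    [ "${flag:-0}" -eq 1 ]     # default the empty value to 0: silent, exit status 1
    [[ $flag == 1 ]]           # or compare as a string, which never type-errors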
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:25.223 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:25.223 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:25.223 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:25.223 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.223 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:25.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:25.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:14:25.224 00:14:25.224 --- 10.0.0.2 ping statistics --- 00:14:25.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.224 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:25.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:25.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:14:25.224 00:14:25.224 --- 10.0.0.1 ping statistics --- 00:14:25.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.224 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=2637177 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 2637177 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2637177 ']' 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:25.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.224 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.224 [2024-10-01 16:39:16.020859] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:14:25.224 [2024-10-01 16:39:16.020908] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.224 [2024-10-01 16:39:16.078208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.224 [2024-10-01 16:39:16.132359] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.224 [2024-10-01 16:39:16.132391] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.224 [2024-10-01 16:39:16.132397] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.224 [2024-10-01 16:39:16.132402] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.224 [2024-10-01 16:39:16.132407] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.224 [2024-10-01 16:39:16.132424] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.224 [2024-10-01 16:39:16.249455] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.224 [2024-10-01 16:39:16.265634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.224 NULL1 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.224 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:25.224 [2024-10-01 16:39:16.318075] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
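With the target running inside the namespace, the rpc_cmd calls above provision it end to end: a TCP transport (with the suite's -o -u 8192 options), subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MB, 512-byte-block null bdev attached as namespace 1 (reported as "size: 1GB" when the tool below connects). Issued by hand through SPDK's stock scripts/rpc.py client, the same sequence would look roughly like this (default RPC socket assumed):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512    # name, size in MB, block size
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1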
00:14:25.224 [2024-10-01 16:39:16.318119] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2637201 ]
00:14:25.224 Attached to nqn.2016-06.io.spdk:cnode1
00:14:25.224 Namespace ID: 1 size: 1GB
00:14:25.224 fused_ordering(0)
[... fused_ordering(1) through fused_ordering(419) logged back to back between 00:14:25.224 and 00:14:26.366, the counter strictly increasing with no gaps ...]
fused_ordering(420) 00:14:26.366
fused_ordering(421) 00:14:26.366 fused_ordering(422) 00:14:26.366 fused_ordering(423) 00:14:26.366 fused_ordering(424) 00:14:26.366 fused_ordering(425) 00:14:26.366 fused_ordering(426) 00:14:26.366 fused_ordering(427) 00:14:26.366 fused_ordering(428) 00:14:26.366 fused_ordering(429) 00:14:26.366 fused_ordering(430) 00:14:26.366 fused_ordering(431) 00:14:26.366 fused_ordering(432) 00:14:26.366 fused_ordering(433) 00:14:26.366 fused_ordering(434) 00:14:26.366 fused_ordering(435) 00:14:26.366 fused_ordering(436) 00:14:26.366 fused_ordering(437) 00:14:26.366 fused_ordering(438) 00:14:26.366 fused_ordering(439) 00:14:26.366 fused_ordering(440) 00:14:26.366 fused_ordering(441) 00:14:26.366 fused_ordering(442) 00:14:26.366 fused_ordering(443) 00:14:26.366 fused_ordering(444) 00:14:26.366 fused_ordering(445) 00:14:26.366 fused_ordering(446) 00:14:26.366 fused_ordering(447) 00:14:26.366 fused_ordering(448) 00:14:26.366 fused_ordering(449) 00:14:26.366 fused_ordering(450) 00:14:26.366 fused_ordering(451) 00:14:26.366 fused_ordering(452) 00:14:26.366 fused_ordering(453) 00:14:26.366 fused_ordering(454) 00:14:26.366 fused_ordering(455) 00:14:26.366 fused_ordering(456) 00:14:26.366 fused_ordering(457) 00:14:26.366 fused_ordering(458) 00:14:26.366 fused_ordering(459) 00:14:26.366 fused_ordering(460) 00:14:26.366 fused_ordering(461) 00:14:26.366 fused_ordering(462) 00:14:26.366 fused_ordering(463) 00:14:26.366 fused_ordering(464) 00:14:26.366 fused_ordering(465) 00:14:26.366 fused_ordering(466) 00:14:26.366 fused_ordering(467) 00:14:26.366 fused_ordering(468) 00:14:26.366 fused_ordering(469) 00:14:26.366 fused_ordering(470) 00:14:26.366 fused_ordering(471) 00:14:26.366 fused_ordering(472) 00:14:26.366 fused_ordering(473) 00:14:26.366 fused_ordering(474) 00:14:26.366 fused_ordering(475) 00:14:26.366 fused_ordering(476) 00:14:26.366 fused_ordering(477) 00:14:26.366 fused_ordering(478) 00:14:26.366 fused_ordering(479) 00:14:26.366 fused_ordering(480) 00:14:26.366 fused_ordering(481) 00:14:26.366 fused_ordering(482) 00:14:26.366 fused_ordering(483) 00:14:26.366 fused_ordering(484) 00:14:26.366 fused_ordering(485) 00:14:26.366 fused_ordering(486) 00:14:26.366 fused_ordering(487) 00:14:26.366 fused_ordering(488) 00:14:26.366 fused_ordering(489) 00:14:26.366 fused_ordering(490) 00:14:26.366 fused_ordering(491) 00:14:26.366 fused_ordering(492) 00:14:26.366 fused_ordering(493) 00:14:26.366 fused_ordering(494) 00:14:26.366 fused_ordering(495) 00:14:26.366 fused_ordering(496) 00:14:26.366 fused_ordering(497) 00:14:26.366 fused_ordering(498) 00:14:26.366 fused_ordering(499) 00:14:26.366 fused_ordering(500) 00:14:26.366 fused_ordering(501) 00:14:26.366 fused_ordering(502) 00:14:26.366 fused_ordering(503) 00:14:26.366 fused_ordering(504) 00:14:26.366 fused_ordering(505) 00:14:26.366 fused_ordering(506) 00:14:26.366 fused_ordering(507) 00:14:26.366 fused_ordering(508) 00:14:26.366 fused_ordering(509) 00:14:26.366 fused_ordering(510) 00:14:26.366 fused_ordering(511) 00:14:26.366 fused_ordering(512) 00:14:26.366 fused_ordering(513) 00:14:26.366 fused_ordering(514) 00:14:26.366 fused_ordering(515) 00:14:26.366 fused_ordering(516) 00:14:26.366 fused_ordering(517) 00:14:26.366 fused_ordering(518) 00:14:26.366 fused_ordering(519) 00:14:26.366 fused_ordering(520) 00:14:26.366 fused_ordering(521) 00:14:26.366 fused_ordering(522) 00:14:26.366 fused_ordering(523) 00:14:26.366 fused_ordering(524) 00:14:26.366 fused_ordering(525) 00:14:26.366 fused_ordering(526) 00:14:26.366 fused_ordering(527) 00:14:26.366 fused_ordering(528) 
00:14:26.366 fused_ordering(529) 00:14:26.366 fused_ordering(530) 00:14:26.366 fused_ordering(531) 00:14:26.366 fused_ordering(532) 00:14:26.366 fused_ordering(533) 00:14:26.366 fused_ordering(534) 00:14:26.366 fused_ordering(535) 00:14:26.366 fused_ordering(536) 00:14:26.366 fused_ordering(537) 00:14:26.366 fused_ordering(538) 00:14:26.366 fused_ordering(539) 00:14:26.366 fused_ordering(540) 00:14:26.366 fused_ordering(541) 00:14:26.366 fused_ordering(542) 00:14:26.366 fused_ordering(543) 00:14:26.366 fused_ordering(544) 00:14:26.366 fused_ordering(545) 00:14:26.366 fused_ordering(546) 00:14:26.366 fused_ordering(547) 00:14:26.366 fused_ordering(548) 00:14:26.366 fused_ordering(549) 00:14:26.366 fused_ordering(550) 00:14:26.366 fused_ordering(551) 00:14:26.366 fused_ordering(552) 00:14:26.366 fused_ordering(553) 00:14:26.366 fused_ordering(554) 00:14:26.366 fused_ordering(555) 00:14:26.366 fused_ordering(556) 00:14:26.366 fused_ordering(557) 00:14:26.366 fused_ordering(558) 00:14:26.366 fused_ordering(559) 00:14:26.366 fused_ordering(560) 00:14:26.366 fused_ordering(561) 00:14:26.366 fused_ordering(562) 00:14:26.366 fused_ordering(563) 00:14:26.366 fused_ordering(564) 00:14:26.366 fused_ordering(565) 00:14:26.366 fused_ordering(566) 00:14:26.366 fused_ordering(567) 00:14:26.366 fused_ordering(568) 00:14:26.366 fused_ordering(569) 00:14:26.366 fused_ordering(570) 00:14:26.366 fused_ordering(571) 00:14:26.366 fused_ordering(572) 00:14:26.366 fused_ordering(573) 00:14:26.366 fused_ordering(574) 00:14:26.366 fused_ordering(575) 00:14:26.366 fused_ordering(576) 00:14:26.366 fused_ordering(577) 00:14:26.366 fused_ordering(578) 00:14:26.366 fused_ordering(579) 00:14:26.366 fused_ordering(580) 00:14:26.366 fused_ordering(581) 00:14:26.366 fused_ordering(582) 00:14:26.366 fused_ordering(583) 00:14:26.366 fused_ordering(584) 00:14:26.366 fused_ordering(585) 00:14:26.366 fused_ordering(586) 00:14:26.366 fused_ordering(587) 00:14:26.366 fused_ordering(588) 00:14:26.366 fused_ordering(589) 00:14:26.366 fused_ordering(590) 00:14:26.366 fused_ordering(591) 00:14:26.366 fused_ordering(592) 00:14:26.366 fused_ordering(593) 00:14:26.366 fused_ordering(594) 00:14:26.366 fused_ordering(595) 00:14:26.366 fused_ordering(596) 00:14:26.366 fused_ordering(597) 00:14:26.366 fused_ordering(598) 00:14:26.366 fused_ordering(599) 00:14:26.366 fused_ordering(600) 00:14:26.366 fused_ordering(601) 00:14:26.366 fused_ordering(602) 00:14:26.366 fused_ordering(603) 00:14:26.366 fused_ordering(604) 00:14:26.366 fused_ordering(605) 00:14:26.366 fused_ordering(606) 00:14:26.366 fused_ordering(607) 00:14:26.366 fused_ordering(608) 00:14:26.367 fused_ordering(609) 00:14:26.367 fused_ordering(610) 00:14:26.367 fused_ordering(611) 00:14:26.367 fused_ordering(612) 00:14:26.367 fused_ordering(613) 00:14:26.367 fused_ordering(614) 00:14:26.367 fused_ordering(615) 00:14:26.626 fused_ordering(616) 00:14:26.627 fused_ordering(617) 00:14:26.627 fused_ordering(618) 00:14:26.627 fused_ordering(619) 00:14:26.627 fused_ordering(620) 00:14:26.627 fused_ordering(621) 00:14:26.627 fused_ordering(622) 00:14:26.627 fused_ordering(623) 00:14:26.627 fused_ordering(624) 00:14:26.627 fused_ordering(625) 00:14:26.627 fused_ordering(626) 00:14:26.627 fused_ordering(627) 00:14:26.627 fused_ordering(628) 00:14:26.627 fused_ordering(629) 00:14:26.627 fused_ordering(630) 00:14:26.627 fused_ordering(631) 00:14:26.627 fused_ordering(632) 00:14:26.627 fused_ordering(633) 00:14:26.627 fused_ordering(634) 00:14:26.627 fused_ordering(635) 00:14:26.627 
fused_ordering(636) 00:14:26.627 fused_ordering(637) 00:14:26.627 fused_ordering(638) 00:14:26.627 fused_ordering(639) 00:14:26.627 fused_ordering(640) 00:14:26.627 fused_ordering(641) 00:14:26.627 fused_ordering(642) 00:14:26.627 fused_ordering(643) 00:14:26.627 fused_ordering(644) 00:14:26.627 fused_ordering(645) 00:14:26.627 fused_ordering(646) 00:14:26.627 fused_ordering(647) 00:14:26.627 fused_ordering(648) 00:14:26.627 fused_ordering(649) 00:14:26.627 fused_ordering(650) 00:14:26.627 fused_ordering(651) 00:14:26.627 fused_ordering(652) 00:14:26.627 fused_ordering(653) 00:14:26.627 fused_ordering(654) 00:14:26.627 fused_ordering(655) 00:14:26.627 fused_ordering(656) 00:14:26.627 fused_ordering(657) 00:14:26.627 fused_ordering(658) 00:14:26.627 fused_ordering(659) 00:14:26.627 fused_ordering(660) 00:14:26.627 fused_ordering(661) 00:14:26.627 fused_ordering(662) 00:14:26.627 fused_ordering(663) 00:14:26.627 fused_ordering(664) 00:14:26.627 fused_ordering(665) 00:14:26.627 fused_ordering(666) 00:14:26.627 fused_ordering(667) 00:14:26.627 fused_ordering(668) 00:14:26.627 fused_ordering(669) 00:14:26.627 fused_ordering(670) 00:14:26.627 fused_ordering(671) 00:14:26.627 fused_ordering(672) 00:14:26.627 fused_ordering(673) 00:14:26.627 fused_ordering(674) 00:14:26.627 fused_ordering(675) 00:14:26.627 fused_ordering(676) 00:14:26.627 fused_ordering(677) 00:14:26.627 fused_ordering(678) 00:14:26.627 fused_ordering(679) 00:14:26.627 fused_ordering(680) 00:14:26.627 fused_ordering(681) 00:14:26.627 fused_ordering(682) 00:14:26.627 fused_ordering(683) 00:14:26.627 fused_ordering(684) 00:14:26.627 fused_ordering(685) 00:14:26.627 fused_ordering(686) 00:14:26.627 fused_ordering(687) 00:14:26.627 fused_ordering(688) 00:14:26.627 fused_ordering(689) 00:14:26.627 fused_ordering(690) 00:14:26.627 fused_ordering(691) 00:14:26.627 fused_ordering(692) 00:14:26.627 fused_ordering(693) 00:14:26.627 fused_ordering(694) 00:14:26.627 fused_ordering(695) 00:14:26.627 fused_ordering(696) 00:14:26.627 fused_ordering(697) 00:14:26.627 fused_ordering(698) 00:14:26.627 fused_ordering(699) 00:14:26.627 fused_ordering(700) 00:14:26.627 fused_ordering(701) 00:14:26.627 fused_ordering(702) 00:14:26.627 fused_ordering(703) 00:14:26.627 fused_ordering(704) 00:14:26.627 fused_ordering(705) 00:14:26.627 fused_ordering(706) 00:14:26.627 fused_ordering(707) 00:14:26.627 fused_ordering(708) 00:14:26.627 fused_ordering(709) 00:14:26.627 fused_ordering(710) 00:14:26.627 fused_ordering(711) 00:14:26.627 fused_ordering(712) 00:14:26.627 fused_ordering(713) 00:14:26.627 fused_ordering(714) 00:14:26.627 fused_ordering(715) 00:14:26.627 fused_ordering(716) 00:14:26.627 fused_ordering(717) 00:14:26.627 fused_ordering(718) 00:14:26.627 fused_ordering(719) 00:14:26.627 fused_ordering(720) 00:14:26.627 fused_ordering(721) 00:14:26.627 fused_ordering(722) 00:14:26.627 fused_ordering(723) 00:14:26.627 fused_ordering(724) 00:14:26.627 fused_ordering(725) 00:14:26.627 fused_ordering(726) 00:14:26.627 fused_ordering(727) 00:14:26.627 fused_ordering(728) 00:14:26.627 fused_ordering(729) 00:14:26.627 fused_ordering(730) 00:14:26.627 fused_ordering(731) 00:14:26.627 fused_ordering(732) 00:14:26.627 fused_ordering(733) 00:14:26.627 fused_ordering(734) 00:14:26.627 fused_ordering(735) 00:14:26.627 fused_ordering(736) 00:14:26.627 fused_ordering(737) 00:14:26.627 fused_ordering(738) 00:14:26.627 fused_ordering(739) 00:14:26.627 fused_ordering(740) 00:14:26.627 fused_ordering(741) 00:14:26.627 fused_ordering(742) 00:14:26.627 fused_ordering(743) 
00:14:26.627 fused_ordering(744) 00:14:26.627 fused_ordering(745) 00:14:26.627 fused_ordering(746) 00:14:26.627 fused_ordering(747) 00:14:26.627 fused_ordering(748) 00:14:26.627 fused_ordering(749) 00:14:26.627 fused_ordering(750) 00:14:26.627 fused_ordering(751) 00:14:26.627 fused_ordering(752) 00:14:26.627 fused_ordering(753) 00:14:26.627 fused_ordering(754) 00:14:26.627 fused_ordering(755) 00:14:26.627 fused_ordering(756) 00:14:26.627 fused_ordering(757) 00:14:26.627 fused_ordering(758) 00:14:26.627 fused_ordering(759) 00:14:26.627 fused_ordering(760) 00:14:26.627 fused_ordering(761) 00:14:26.627 fused_ordering(762) 00:14:26.627 fused_ordering(763) 00:14:26.627 fused_ordering(764) 00:14:26.627 fused_ordering(765) 00:14:26.627 fused_ordering(766) 00:14:26.627 fused_ordering(767) 00:14:26.627 fused_ordering(768) 00:14:26.627 fused_ordering(769) 00:14:26.627 fused_ordering(770) 00:14:26.627 fused_ordering(771) 00:14:26.627 fused_ordering(772) 00:14:26.627 fused_ordering(773) 00:14:26.627 fused_ordering(774) 00:14:26.627 fused_ordering(775) 00:14:26.627 fused_ordering(776) 00:14:26.627 fused_ordering(777) 00:14:26.627 fused_ordering(778) 00:14:26.627 fused_ordering(779) 00:14:26.627 fused_ordering(780) 00:14:26.627 fused_ordering(781) 00:14:26.627 fused_ordering(782) 00:14:26.627 fused_ordering(783) 00:14:26.627 fused_ordering(784) 00:14:26.627 fused_ordering(785) 00:14:26.627 fused_ordering(786) 00:14:26.627 fused_ordering(787) 00:14:26.627 fused_ordering(788) 00:14:26.627 fused_ordering(789) 00:14:26.627 fused_ordering(790) 00:14:26.627 fused_ordering(791) 00:14:26.627 fused_ordering(792) 00:14:26.627 fused_ordering(793) 00:14:26.627 fused_ordering(794) 00:14:26.627 fused_ordering(795) 00:14:26.627 fused_ordering(796) 00:14:26.627 fused_ordering(797) 00:14:26.627 fused_ordering(798) 00:14:26.627 fused_ordering(799) 00:14:26.627 fused_ordering(800) 00:14:26.627 fused_ordering(801) 00:14:26.627 fused_ordering(802) 00:14:26.627 fused_ordering(803) 00:14:26.627 fused_ordering(804) 00:14:26.627 fused_ordering(805) 00:14:26.627 fused_ordering(806) 00:14:26.627 fused_ordering(807) 00:14:26.627 fused_ordering(808) 00:14:26.627 fused_ordering(809) 00:14:26.627 fused_ordering(810) 00:14:26.627 fused_ordering(811) 00:14:26.627 fused_ordering(812) 00:14:26.627 fused_ordering(813) 00:14:26.627 fused_ordering(814) 00:14:26.627 fused_ordering(815) 00:14:26.627 fused_ordering(816) 00:14:26.627 fused_ordering(817) 00:14:26.627 fused_ordering(818) 00:14:26.627 fused_ordering(819) 00:14:26.627 fused_ordering(820) 00:14:27.197 fused_ordering(821) 00:14:27.197 fused_ordering(822) 00:14:27.198 fused_ordering(823) 00:14:27.198 fused_ordering(824) 00:14:27.198 fused_ordering(825) 00:14:27.198 fused_ordering(826) 00:14:27.198 fused_ordering(827) 00:14:27.198 fused_ordering(828) 00:14:27.198 fused_ordering(829) 00:14:27.198 fused_ordering(830) 00:14:27.198 fused_ordering(831) 00:14:27.198 fused_ordering(832) 00:14:27.198 fused_ordering(833) 00:14:27.198 fused_ordering(834) 00:14:27.198 fused_ordering(835) 00:14:27.198 fused_ordering(836) 00:14:27.198 fused_ordering(837) 00:14:27.198 fused_ordering(838) 00:14:27.198 fused_ordering(839) 00:14:27.198 fused_ordering(840) 00:14:27.198 fused_ordering(841) 00:14:27.198 fused_ordering(842) 00:14:27.198 fused_ordering(843) 00:14:27.198 fused_ordering(844) 00:14:27.198 fused_ordering(845) 00:14:27.198 fused_ordering(846) 00:14:27.198 fused_ordering(847) 00:14:27.198 fused_ordering(848) 00:14:27.198 fused_ordering(849) 00:14:27.198 fused_ordering(850) 00:14:27.198 
fused_ordering(851) 00:14:27.198 fused_ordering(852) 00:14:27.198 fused_ordering(853) 00:14:27.198 fused_ordering(854) 00:14:27.198 fused_ordering(855) 00:14:27.198 fused_ordering(856) 00:14:27.198 fused_ordering(857) 00:14:27.198 fused_ordering(858) 00:14:27.198 fused_ordering(859) 00:14:27.198 fused_ordering(860) 00:14:27.198 fused_ordering(861) 00:14:27.198 fused_ordering(862) 00:14:27.198 fused_ordering(863) 00:14:27.198 fused_ordering(864) 00:14:27.198 fused_ordering(865) 00:14:27.198 fused_ordering(866) 00:14:27.198 fused_ordering(867) 00:14:27.198 fused_ordering(868) 00:14:27.198 fused_ordering(869) 00:14:27.198 fused_ordering(870) 00:14:27.198 fused_ordering(871) 00:14:27.198 fused_ordering(872) 00:14:27.198 fused_ordering(873) 00:14:27.198 fused_ordering(874) 00:14:27.198 fused_ordering(875) 00:14:27.198 fused_ordering(876) 00:14:27.198 fused_ordering(877) 00:14:27.198 fused_ordering(878) 00:14:27.198 fused_ordering(879) 00:14:27.198 fused_ordering(880) 00:14:27.198 fused_ordering(881) 00:14:27.198 fused_ordering(882) 00:14:27.198 fused_ordering(883) 00:14:27.198 fused_ordering(884) 00:14:27.198 fused_ordering(885) 00:14:27.198 fused_ordering(886) 00:14:27.198 fused_ordering(887) 00:14:27.198 fused_ordering(888) 00:14:27.198 fused_ordering(889) 00:14:27.198 fused_ordering(890) 00:14:27.198 fused_ordering(891) 00:14:27.198 fused_ordering(892) 00:14:27.198 fused_ordering(893) 00:14:27.198 fused_ordering(894) 00:14:27.198 fused_ordering(895) 00:14:27.198 fused_ordering(896) 00:14:27.198 fused_ordering(897) 00:14:27.198 fused_ordering(898) 00:14:27.198 fused_ordering(899) 00:14:27.198 fused_ordering(900) 00:14:27.198 fused_ordering(901) 00:14:27.198 fused_ordering(902) 00:14:27.198 fused_ordering(903) 00:14:27.198 fused_ordering(904) 00:14:27.198 fused_ordering(905) 00:14:27.198 fused_ordering(906) 00:14:27.198 fused_ordering(907) 00:14:27.198 fused_ordering(908) 00:14:27.198 fused_ordering(909) 00:14:27.198 fused_ordering(910) 00:14:27.198 fused_ordering(911) 00:14:27.198 fused_ordering(912) 00:14:27.198 fused_ordering(913) 00:14:27.198 fused_ordering(914) 00:14:27.198 fused_ordering(915) 00:14:27.198 fused_ordering(916) 00:14:27.198 fused_ordering(917) 00:14:27.198 fused_ordering(918) 00:14:27.198 fused_ordering(919) 00:14:27.198 fused_ordering(920) 00:14:27.198 fused_ordering(921) 00:14:27.198 fused_ordering(922) 00:14:27.198 fused_ordering(923) 00:14:27.198 fused_ordering(924) 00:14:27.198 fused_ordering(925) 00:14:27.198 fused_ordering(926) 00:14:27.198 fused_ordering(927) 00:14:27.198 fused_ordering(928) 00:14:27.198 fused_ordering(929) 00:14:27.198 fused_ordering(930) 00:14:27.198 fused_ordering(931) 00:14:27.198 fused_ordering(932) 00:14:27.198 fused_ordering(933) 00:14:27.198 fused_ordering(934) 00:14:27.198 fused_ordering(935) 00:14:27.198 fused_ordering(936) 00:14:27.198 fused_ordering(937) 00:14:27.198 fused_ordering(938) 00:14:27.198 fused_ordering(939) 00:14:27.198 fused_ordering(940) 00:14:27.198 fused_ordering(941) 00:14:27.198 fused_ordering(942) 00:14:27.198 fused_ordering(943) 00:14:27.198 fused_ordering(944) 00:14:27.198 fused_ordering(945) 00:14:27.198 fused_ordering(946) 00:14:27.198 fused_ordering(947) 00:14:27.198 fused_ordering(948) 00:14:27.198 fused_ordering(949) 00:14:27.198 fused_ordering(950) 00:14:27.198 fused_ordering(951) 00:14:27.198 fused_ordering(952) 00:14:27.198 fused_ordering(953) 00:14:27.198 fused_ordering(954) 00:14:27.198 fused_ordering(955) 00:14:27.198 fused_ordering(956) 00:14:27.198 fused_ordering(957) 00:14:27.198 fused_ordering(958) 
00:14:27.198 fused_ordering(959) 00:14:27.198 fused_ordering(960) 00:14:27.198 fused_ordering(961) 00:14:27.198 fused_ordering(962) 00:14:27.198 fused_ordering(963) 00:14:27.198 fused_ordering(964) 00:14:27.198 fused_ordering(965) 00:14:27.198 fused_ordering(966) 00:14:27.198 fused_ordering(967) 00:14:27.198 fused_ordering(968) 00:14:27.198 fused_ordering(969) 00:14:27.198 fused_ordering(970) 00:14:27.198 fused_ordering(971) 00:14:27.198 fused_ordering(972) 00:14:27.198 fused_ordering(973) 00:14:27.198 fused_ordering(974) 00:14:27.198 fused_ordering(975) 00:14:27.198 fused_ordering(976) 00:14:27.198 fused_ordering(977) 00:14:27.198 fused_ordering(978) 00:14:27.198 fused_ordering(979) 00:14:27.198 fused_ordering(980) 00:14:27.198 fused_ordering(981) 00:14:27.198 fused_ordering(982) 00:14:27.198 fused_ordering(983) 00:14:27.198 fused_ordering(984) 00:14:27.198 fused_ordering(985) 00:14:27.198 fused_ordering(986) 00:14:27.198 fused_ordering(987) 00:14:27.198 fused_ordering(988) 00:14:27.198 fused_ordering(989) 00:14:27.198 fused_ordering(990) 00:14:27.198 fused_ordering(991) 00:14:27.198 fused_ordering(992) 00:14:27.198 fused_ordering(993) 00:14:27.198 fused_ordering(994) 00:14:27.198 fused_ordering(995) 00:14:27.198 fused_ordering(996) 00:14:27.198 fused_ordering(997) 00:14:27.198 fused_ordering(998) 00:14:27.198 fused_ordering(999) 00:14:27.198 fused_ordering(1000) 00:14:27.198 fused_ordering(1001) 00:14:27.198 fused_ordering(1002) 00:14:27.198 fused_ordering(1003) 00:14:27.198 fused_ordering(1004) 00:14:27.198 fused_ordering(1005) 00:14:27.198 fused_ordering(1006) 00:14:27.198 fused_ordering(1007) 00:14:27.198 fused_ordering(1008) 00:14:27.198 fused_ordering(1009) 00:14:27.198 fused_ordering(1010) 00:14:27.198 fused_ordering(1011) 00:14:27.198 fused_ordering(1012) 00:14:27.198 fused_ordering(1013) 00:14:27.198 fused_ordering(1014) 00:14:27.198 fused_ordering(1015) 00:14:27.198 fused_ordering(1016) 00:14:27.198 fused_ordering(1017) 00:14:27.198 fused_ordering(1018) 00:14:27.198 fused_ordering(1019) 00:14:27.198 fused_ordering(1020) 00:14:27.198 fused_ordering(1021) 00:14:27.198 fused_ordering(1022) 00:14:27.198 fused_ordering(1023) 00:14:27.198 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:27.198 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:27.198 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:27.198 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:27.198 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:27.198 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:27.198 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:27.198 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:27.198 rmmod nvme_tcp 00:14:27.198 rmmod nvme_fabrics 00:14:27.198 rmmod nvme_keyring 00:14:27.198 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:27.198 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:27.198 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:27.198 16:39:18 
00:14:27.198 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 2637177 ']'
00:14:27.198 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 2637177
00:14:27.198 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2637177 ']'
00:14:27.198 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2637177
00:14:27.198 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname
00:14:27.198 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:27.198 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2637177
00:14:27.458 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:14:27.458 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:14:27.458 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2637177'
00:14:27.458 killing process with pid 2637177
00:14:27.458 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2637177
00:14:27.458 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2637177
00:14:27.458 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:14:27.458 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:14:27.458 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:14:27.458 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:14:27.459 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save
00:14:27.459 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:14:27.459 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore
00:14:27.459 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:27.459 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:27.459 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:27.459 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:27.459 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:30.003 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:30.003
00:14:30.003 real 0m12.780s
00:14:30.003 user 0m6.792s
00:14:30.003 sys 0m6.724s
00:14:30.003 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:30.003 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:30.003 ************************************
00:14:30.003 END TEST nvmf_fused_ordering
00:14:30.004 ************************************
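The fused-ordering pass above attached to nqn.2016-06.io.spdk:cnode1 and drove 1024 operations against a 1GB namespace before tearing the target down. A minimal sketch of how an equivalent target can be stood up with SPDK's stock RPCs follows; the rpc.py subcommands are standard SPDK RPCs, but the bdev name, sizing, listen address and port are assumptions read off this log rather than taken from fused_ordering.sh itself.

```bash
# Sketch only: stands up a TCP target equivalent to the one this test attached to.
# Malloc0, the 1024MB/512B sizing and the 10.0.0.2:4420 listener are assumptions
# inferred from this log, not the test script's actual arguments.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o                     # '-t tcp -o', as in NVMF_TRANSPORT_OPTS
$rpc bdev_malloc_create -b Malloc0 1024 512              # 1GB backing bdev -> "size: 1GB"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # shows up as Namespace ID: 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```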
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:30.004 ************************************
00:14:30.004 START TEST nvmf_ns_masking
00:14:30.004 ************************************
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:14:30.004 * Looking for test storage...
00:14:30.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:14:30.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:30.004 --rc genhtml_branch_coverage=1
00:14:30.004 --rc genhtml_function_coverage=1
00:14:30.004 --rc genhtml_legend=1
00:14:30.004 --rc geninfo_all_blocks=1
00:14:30.004 --rc geninfo_unexecuted_blocks=1
00:14:30.004
00:14:30.004 '
[the LCOV_OPTS assignment at common/autotest_common.sh@1694 and the export/assignment of LCOV='lcov ...' at @1695 repeat the same option block three more times; near-identical blocks elided]
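The cmp_versions trace above splits each version string on '.', '-' and ':' and compares the fields numerically, left to right, which is how "lcov 1.15 < 2" is decided. A self-contained sketch of that comparison logic, restricted to the '<' branch used here (cmp_lt is a hypothetical name, not the function in scripts/common.sh):

```bash
# Self-contained sketch of the field-wise comparison traced above.
cmp_lt() {
    local IFS=.-:            # split fields on '.', '-' or ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    read -ra ver2 <<< "$2"   # "2"    -> (2)
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
    done
    return 1                 # equal versions are not "less than"
}

cmp_lt 1.15 2 && echo "lcov 1.15 predates 2"   # mirrors the 'lt 1.15 2' call above
```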
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:30.004 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[six further repetitions of the same three /opt toolchain prefixes elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[paths/export.sh@3 and paths/export.sh@4 prepend /opt/go/1.21.1/bin and /opt/protoc/21.7/bin to the same PATH value; two near-identical records elided]
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[repeated toolchain prefixes and system directories elided]:/var/lib/snapd/snap/bin
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:14:30.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
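The "[: : integer expression expected" line above is a real shell error, not extraction noise: nvmf/common.sh line 33 tests an empty NVMF_APP_SHM_ID with -eq, and '[' cannot compare an empty string numerically. The run proceeds anyway because the failed test simply skips the branch. A defensive form that avoids the error message might look like the following sketch (an illustration, not the repository's actual fix):

```bash
# The guard that failed: '[' '' -eq 1 ']' errors out when NVMF_APP_SHM_ID is
# empty. Only compare after confirming the value is numeric.
if [[ ${NVMF_APP_SHM_ID:-} =~ ^[0-9]+$ ]] && [ "$NVMF_APP_SHM_ID" -eq 1 ]; then
    echo "SHM id is 1"   # placeholder branch; the real branch body is not visible in this log
fi
```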
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=ed1b3a64-0786-4624-868b-57ca1bbd4ebb
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=bc0008e4-0769-434d-aa56-72d8f631bb04
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b40184e6-3334-42d0-9053-c037697869fe
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:14:30.005 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=()
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=()
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=()
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=()
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=()
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=()
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=()
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:14:38.145 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:38.145 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:14:38.146 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]]
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:14:38.146 Found net devices under 0000:4b:00.0: cvl_0_0
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]]
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:14:38.146 Found net devices under 0000:4b:00.1: cvl_0_1
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes
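The discovery pass above first builds per-vendor PCI device-ID allowlists (e810, x722, mlx), then resolves each surviving PCI function to its kernel interface by globbing sysfs, which is how 0000:4b:00.0 and 0000:4b:00.1 become cvl_0_0 and cvl_0_1. A standalone sketch of that resolution step, using the two e810 ports this run found:

```bash
# Standalone sketch of the sysfs resolution traced above: the kernel interface
# bound to a PCI function is the directory name under
# /sys/bus/pci/devices/<bdf>/net/.
for bdf in 0000:4b:00.0 0000:4b:00.1; do
    for netdir in "/sys/bus/pci/devices/$bdf/net/"*; do
        [ -e "$netdir" ] || continue                         # skip if no netdev is bound
        echo "Found net devices under $bdf: ${netdir##*/}"   # e.g. cvl_0_0
    done
done
```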
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:38.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:14:38.146 00:14:38.146 --- 10.0.0.2 ping statistics --- 00:14:38.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.146 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:38.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:14:38.146 00:14:38.146 --- 10.0.0.1 ping statistics --- 00:14:38.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.146 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=2641709 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 2641709 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2641709 ']' 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:38.146 16:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:38.146 [2024-10-01 16:39:28.809423] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:14:38.146 [2024-10-01 16:39:28.809487] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.146 [2024-10-01 16:39:28.895501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.146 [2024-10-01 16:39:28.986357] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.146 [2024-10-01 16:39:28.986413] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.146 [2024-10-01 16:39:28.986421] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.146 [2024-10-01 16:39:28.986428] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.146 [2024-10-01 16:39:28.986434] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
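
The nvmf_tcp_init sequence traced just before the target came up is what turns one box with two e810 ports into a two-host topology: the target port is moved into its own network namespace, so 10.0.0.1 to 10.0.0.2 traffic genuinely crosses the link instead of being short-circuited inside one kernel stack. A condensed sketch of that sequence with the interface and address values from this run ($SPDK_DIR stands in for the Jenkins workspace path; the real ipts wrapper also tags the iptables rule with an SPDK_NVMF comment for later cleanup):

ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP discovery traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # and back
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &

Both pings answering (0% packet loss above) is what lets nvmf_tcp_init return 0; nvmfappstart then prefixes the target command with the same ip netns exec wrapper, which is the NVMF_TARGET_NS_CMD array seen in the trace.
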
00:14:38.146 [2024-10-01 16:39:28.986459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.146 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:38.146 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:38.146 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:38.146 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:38.146 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:38.146 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.147 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:38.407 [2024-10-01 16:39:29.950810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.407 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:38.407 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:38.407 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:38.668 Malloc1 00:14:38.668 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:38.929 Malloc2 00:14:38.929 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:39.189 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:39.449 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.449 [2024-10-01 16:39:31.090937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.449 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:39.449 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b40184e6-3334-42d0-9053-c037697869fe -a 10.0.0.2 -s 4420 -i 4 00:14:39.709 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:39.709 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:39.709 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:39.709 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:39.709 
16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:42.255 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:42.255 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:42.255 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.255 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:42.255 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:42.255 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:42.255 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:42.255 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:42.255 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:42.255 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:42.255 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:42.255 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:42.255 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:42.255 [ 0]:0x1 00:14:42.255 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:42.255 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4a133a237a94586b35235daf441f1d0 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4a133a237a94586b35235daf441f1d0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:42.256 [ 0]:0x1 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4a133a237a94586b35235daf441f1d0 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4a133a237a94586b35235daf441f1d0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.256 16:39:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:42.256 [ 1]:0x2 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d0fb553c8033488cbb095b6ba3623924 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d0fb553c8033488cbb095b6ba3623924 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:42.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.256 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.516 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:42.777 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:42.777 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b40184e6-3334-42d0-9053-c037697869fe -a 10.0.0.2 -s 4420 -i 4 00:14:43.038 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:43.038 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:43.038 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:43.038 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:43.038 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:43.038 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:44.951 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:44.951 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:44.951 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:44.951 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:44.951 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:44.951 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:14:44.951 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:44.952 [ 0]:0x2 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:44.952 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:45.213 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=d0fb553c8033488cbb095b6ba3623924 00:14:45.213 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d0fb553c8033488cbb095b6ba3623924 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.213 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:45.213 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:45.213 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.213 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:45.213 [ 0]:0x1 00:14:45.213 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:45.213 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.213 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4a133a237a94586b35235daf441f1d0 00:14:45.213 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4a133a237a94586b35235daf441f1d0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.213 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:45.213 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.213 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:45.473 [ 1]:0x2 00:14:45.473 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:45.473 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.473 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d0fb553c8033488cbb095b6ba3623924 00:14:45.473 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d0fb553c8033488cbb095b6ba3623924 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.473 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:45.473 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:45.473 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:45.473 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:45.473 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:45.473 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.473 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:45.473 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.473 16:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:45.473 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.473 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:45.473 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.473 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:45.732 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:45.732 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.732 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:45.732 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:45.732 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:45.732 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:45.732 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:45.732 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.732 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:45.732 [ 0]:0x2 00:14:45.732 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:45.732 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.732 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d0fb553c8033488cbb095b6ba3623924 00:14:45.732 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d0fb553c8033488cbb095b6ba3623924 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.732 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:45.732 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:45.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.732 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:45.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:45.992 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b40184e6-3334-42d0-9053-c037697869fe -a 10.0.0.2 -s 4420 -i 4 00:14:46.251 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:46.251 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:46.251 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:46.251 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:46.251 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:46.251 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:48.159 [ 0]:0x1 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4a133a237a94586b35235daf441f1d0 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4a133a237a94586b35235daf441f1d0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.159 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:48.159 [ 1]:0x2 00:14:48.160 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:48.160 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.420 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d0fb553c8033488cbb095b6ba3623924 00:14:48.420 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d0fb553c8033488cbb095b6ba3623924 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.420 16:39:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:48.420 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:48.420 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:48.420 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:48.420 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:48.420 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.420 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:48.420 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.420 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:48.420 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.420 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:48.680 [ 0]:0x2 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d0fb553c8033488cbb095b6ba3623924 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d0fb553c8033488cbb095b6ba3623924 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.680 16:39:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:48.680 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:48.941 [2024-10-01 16:39:40.377179] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:48.941 request: 00:14:48.941 { 00:14:48.941 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.941 "nsid": 2, 00:14:48.941 "host": "nqn.2016-06.io.spdk:host1", 00:14:48.941 "method": "nvmf_ns_remove_host", 00:14:48.941 "req_id": 1 00:14:48.941 } 00:14:48.941 Got JSON-RPC error response 00:14:48.941 response: 00:14:48.941 { 00:14:48.941 "code": -32602, 00:14:48.941 "message": "Invalid parameters" 00:14:48.941 } 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:48.941 16:39:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:48.941 [ 0]:0x2 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d0fb553c8033488cbb095b6ba3623924 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d0fb553c8033488cbb095b6ba3623924 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:48.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2643711 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2643711 /var/tmp/host.sock 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2643711 ']' 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:48.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:48.941 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:48.941 [2024-10-01 16:39:40.610252] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:14:48.941 [2024-10-01 16:39:40.610302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2643711 ] 00:14:49.201 [2024-10-01 16:39:40.661126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.201 [2024-10-01 16:39:40.715766] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.771 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:49.771 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:49.771 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.031 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:50.291 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid ed1b3a64-0786-4624-868b-57ca1bbd4ebb 00:14:50.291 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:50.291 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g ED1B3A6407864624868B57CA1BBD4EBB -i 00:14:50.550 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid bc0008e4-0769-434d-aa56-72d8f631bb04 00:14:50.550 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:50.550 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g BC0008E40769434DAA5672D8F631BB04 -i 00:14:50.550 16:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:50.810 16:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:51.070 16:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:51.070 16:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:51.330 nvme0n1 00:14:51.330 16:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:51.330 16:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:51.590 nvme1n2 00:14:51.590 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:51.590 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:51.590 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:51.590 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:51.590 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:51.850 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:51.850 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:51.850 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:51.850 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:52.110 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ ed1b3a64-0786-4624-868b-57ca1bbd4ebb == \e\d\1\b\3\a\6\4\-\0\7\8\6\-\4\6\2\4\-\8\6\8\b\-\5\7\c\a\1\b\b\d\4\e\b\b ]] 00:14:52.110 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:52.110 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:52.110 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:52.370 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
bc0008e4-0769-434d-aa56-72d8f631bb04 == \b\c\0\0\0\8\e\4\-\0\7\6\9\-\4\3\4\d\-\a\a\5\6\-\7\2\d\8\f\6\3\1\b\b\0\4 ]] 00:14:52.370 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2643711 00:14:52.370 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2643711 ']' 00:14:52.370 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2643711 00:14:52.371 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:52.371 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:52.371 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2643711 00:14:52.371 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:52.371 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:52.371 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2643711' 00:14:52.371 killing process with pid 2643711 00:14:52.371 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2643711 00:14:52.371 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2643711 00:14:52.632 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:52.892 rmmod nvme_tcp 00:14:52.892 rmmod nvme_fabrics 00:14:52.892 rmmod nvme_keyring 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 2641709 ']' 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 2641709 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2641709 ']' 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2641709 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@955 -- # uname 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2641709 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2641709' 00:14:52.892 killing process with pid 2641709 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2641709 00:14:52.892 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2641709 00:14:53.152 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:53.152 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:53.152 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:53.152 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:53.152 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:14:53.152 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:53.152 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:14:53.152 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:53.152 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:53.152 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.152 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:53.152 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.072 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:55.072 00:14:55.072 real 0m25.535s 00:14:55.072 user 0m26.908s 00:14:55.072 sys 0m7.692s 00:14:55.072 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:55.072 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:55.072 ************************************ 00:14:55.072 END TEST nvmf_ns_masking 00:14:55.072 ************************************ 00:14:55.072 16:39:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:55.072 16:39:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:55.072 16:39:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:55.072 16:39:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:55.072 16:39:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
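
Stripped of the xtrace plumbing, the ns_masking run that just ended (25.5 s real) boils down to a short RPC sequence; a condensed replay with the values used above, where $rpc is assumed to point at scripts/rpc.py in the workspace checkout and the target from the earlier setup is still listening:

rpc="$SPDK_DIR/scripts/rpc.py"                                        # path is an assumption
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1                             # 64 MiB, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
$rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # unmask NSID 1 for host1
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # mask it again

On the host side, visibility is checked exactly as ns_is_visible did above: a masked NSID drops out of nvme list-ns, and nvme id-ns reports an all-zero NGUID for it, which is the 00000000... comparison in the trace. The one expected failure is also visible above: nvmf_ns_remove_host against NSID 2, which was added without --no-auto-visible, is rejected with the -32602 Invalid parameters JSON-RPC error, since host masking only applies to namespaces created as not auto-visible.
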
00:14:55.367 ************************************ 00:14:55.367 START TEST nvmf_nvme_cli 00:14:55.367 ************************************ 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:55.367 * Looking for test storage... 00:14:55.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:55.367 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:55.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.367 --rc genhtml_branch_coverage=1 00:14:55.367 --rc genhtml_function_coverage=1 00:14:55.368 --rc genhtml_legend=1 00:14:55.368 --rc geninfo_all_blocks=1 00:14:55.368 --rc geninfo_unexecuted_blocks=1 00:14:55.368 00:14:55.368 ' 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:55.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.368 --rc genhtml_branch_coverage=1 00:14:55.368 --rc genhtml_function_coverage=1 00:14:55.368 --rc genhtml_legend=1 00:14:55.368 --rc geninfo_all_blocks=1 00:14:55.368 --rc geninfo_unexecuted_blocks=1 00:14:55.368 00:14:55.368 ' 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:55.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.368 --rc genhtml_branch_coverage=1 00:14:55.368 --rc genhtml_function_coverage=1 00:14:55.368 --rc genhtml_legend=1 00:14:55.368 --rc geninfo_all_blocks=1 00:14:55.368 --rc geninfo_unexecuted_blocks=1 00:14:55.368 00:14:55.368 ' 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:55.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.368 --rc genhtml_branch_coverage=1 00:14:55.368 --rc genhtml_function_coverage=1 00:14:55.368 --rc genhtml_legend=1 00:14:55.368 --rc geninfo_all_blocks=1 00:14:55.368 --rc geninfo_unexecuted_blocks=1 00:14:55.368 00:14:55.368 ' 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
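The cmp_versions walk traced above (used here to decide which lcov options the host supports) splits both version strings on dots and compares them field by field, treating a missing field as 0. A compact sketch of the same pattern, not the literal scripts/common.sh source:

  lt() {
      local -a v1 v2
      IFS=.- read -ra v1 <<< "$1"
      IFS=.- read -ra v2 <<< "$2"
      local i
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller at this field: version 1 is older
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                                          # equal all the way down: not less-than
  }
  lt 1.15 2 && echo 'lcov < 2: enable the branch/function coverage options'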
00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:55.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:55.368 16:39:46 
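The '[: : integer expression expected' warning above is bash objecting that line 33 of test/nvmf/common.sh compared an empty expansion numerically ('[' '' -eq 1 ']'). The usual hardening is a defaulted expansion; a sketch with a stand-in name, since the variable actually read at line 33 is not visible in this trace:

  # failing form, as traced:   [ '' -eq 1 ]   -> "integer expression expected"
  # the defaulted form below always hands test an integer
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # SOME_FLAG is hypothetical, not the name common.sh uses
      NVMF_APP+=(--some-flag)            # hypothetical effect, mirroring how NVMF_APP is extended above
  fi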
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:55.368 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:03.543 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:03.544 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:03.544 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.544 
16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:03.544 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:03.544 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
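The scan traced above has two halves: classify each PCI function by vendor:device pair (both ports here match Intel 0x8086:0x159b, an E810), then list the function's net/ directory in sysfs to find its kernel interface. A sketch over this run's two addresses:

  for pci in 0000:4b:00.0 0000:4b:00.1; do           # the two 0x8086:0x159b functions found above
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$path" ] || continue                 # no net/ entry means the port is unbound from its driver
          echo "Found net device under $pci: ${path##*/}"
      done
  done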
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:03.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:03.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:15:03.544 00:15:03.544 --- 10.0.0.2 ping statistics --- 00:15:03.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.544 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:03.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:03.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:15:03.544 00:15:03.544 --- 10.0.0.1 ping statistics --- 00:15:03.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.544 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=2648392 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 2648392 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2648392 ']' 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:03.544 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.545 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:03.545 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:03.545 [2024-10-01 16:39:54.562033] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
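The namespace plumbing and target launch traced above can be replayed on their own: one port moves into a private netns as the target side, the other stays in the root namespace as the initiator, and a tagged iptables rule opens port 4420. A sketch with the workspace path shortened and the target backgrounded:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &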
00:15:03.545 [2024-10-01 16:39:54.562095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.545 [2024-10-01 16:39:54.649879] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:03.545 [2024-10-01 16:39:54.743316] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.545 [2024-10-01 16:39:54.743382] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.545 [2024-10-01 16:39:54.743390] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.545 [2024-10-01 16:39:54.743397] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.545 [2024-10-01 16:39:54.743403] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:03.545 [2024-10-01 16:39:54.743547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.545 [2024-10-01 16:39:54.743735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.545 [2024-10-01 16:39:54.743682] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.545 [2024-10-01 16:39:54.743731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:03.805 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.805 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:15:03.805 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:03.805 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:03.805 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:03.805 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.805 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:03.805 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.805 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.066 [2024-10-01 16:39:55.489600] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.066 Malloc0 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
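Once the reactors are up, the provisioning performed here and in the next stretch of the trace is a short rpc.py sequence. A sketch with paths shortened; the commands and arguments below appear verbatim in the trace:

  rpc=./scripts/rpc.py                                   # path shortened; talks to the target over /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0              # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE from above
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420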
00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.066 Malloc1 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.066 [2024-10-01 16:39:55.575559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 4420 00:15:04.066 00:15:04.066 Discovery Log Number of Records 2, Generation counter 2 00:15:04.066 =====Discovery Log Entry 0====== 00:15:04.066 trtype: tcp 00:15:04.066 adrfam: ipv4 00:15:04.066 subtype: current discovery subsystem 00:15:04.066 treq: not required 00:15:04.066 portid: 0 00:15:04.066 trsvcid: 4420 00:15:04.066 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:15:04.066 traddr: 10.0.0.2 00:15:04.066 eflags: explicit discovery connections, duplicate discovery information 00:15:04.066 sectype: none 00:15:04.066 =====Discovery Log Entry 1====== 00:15:04.066 trtype: tcp 00:15:04.066 adrfam: ipv4 00:15:04.066 subtype: nvme subsystem 00:15:04.066 treq: not required 00:15:04.066 portid: 0 00:15:04.066 trsvcid: 4420 00:15:04.066 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:04.066 traddr: 10.0.0.2 00:15:04.066 eflags: none 00:15:04.066 sectype: none 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:04.066 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:15:04.326 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:15:04.326 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:04.326 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:15:04.326 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:04.326 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:04.326 16:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:05.708 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:05.708 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:15:05.708 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:05.708 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:05.708 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:05.708 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:07.616 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:07.616 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:07.616 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:07.616 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:07.616 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:07.616 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:07.616 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:07.616 16:39:59 
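After the two discovery log entries above, the harness connects and polls until both namespaces of cnode1 surface as block devices carrying the subsystem serial. A sketch of that waitforserial pattern, reusing the NVME_HOST array from the trace:

  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  for ((i = 0; i <= 15; i++)); do                        # same retry bound the harness uses
      n=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
      (( n == 2 )) && break                              # both namespaces visible: done
      sleep 2
  done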
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:15:07.616 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:07.616 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:15:07.876 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:15:07.876 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:07.876 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:15:07.876 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:07.876 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:07.876 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:15:07.876 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:07.876 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:07.876 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:15:07.876 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:07.876 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:07.876 /dev/nvme0n2 ]] 00:15:07.876 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:07.876 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:07.876 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:15:07.876 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:07.876 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:15:08.136 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:15:08.136 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:08.136 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:15:08.136 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:08.136 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:08.136 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:15:08.136 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:08.136 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:08.136 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:15:08.136 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:08.136 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:08.136 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:08.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.396 16:39:59 
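get_nvme_devs, stepped through above, is only a filter over nvme list: skip the header and separator rows, keep the first column when it looks like a device node. A sketch of the same read loop:

  get_nvme_devs() {
      local dev _
      while read -r dev _; do
          [[ $dev == /dev/nvme* ]] && echo "$dev"   # the 'Node' header and dashed rule fall through
      done < <(nvme list)
  }
  devs=($(get_nvme_devs))                           # /dev/nvme0n1 /dev/nvme0n2 in this run
  echo "${#devs[@]} namespaces connected"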
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:08.396 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:08.396 rmmod nvme_tcp 00:15:08.396 rmmod nvme_fabrics 00:15:08.396 rmmod nvme_keyring 00:15:08.396 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:08.396 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:08.396 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:08.396 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 2648392 ']' 00:15:08.396 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 2648392 00:15:08.396 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2648392 ']' 00:15:08.396 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2648392 00:15:08.396 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:15:08.396 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:08.396 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
2648392 00:15:08.657 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:08.657 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:08.657 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2648392' 00:15:08.657 killing process with pid 2648392 00:15:08.657 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2648392 00:15:08.657 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2648392 00:15:08.657 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:08.657 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:08.657 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:08.657 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:08.657 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:15:08.657 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:15:08.657 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:08.657 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:08.657 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:08.657 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.657 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:08.657 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.202 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:11.202 00:15:11.202 real 0m15.594s 00:15:11.202 user 0m24.383s 00:15:11.202 sys 0m6.325s 00:15:11.202 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:11.202 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:11.202 ************************************ 00:15:11.202 END TEST nvmf_nvme_cli 00:15:11.202 ************************************ 00:15:11.202 16:40:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:11.203 ************************************ 00:15:11.203 START TEST nvmf_vfio_user 00:15:11.203 ************************************ 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:15:11.203 * Looking for test storage... 00:15:11.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:11.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.203 --rc genhtml_branch_coverage=1 00:15:11.203 --rc genhtml_function_coverage=1 00:15:11.203 --rc genhtml_legend=1 00:15:11.203 --rc geninfo_all_blocks=1 00:15:11.203 --rc geninfo_unexecuted_blocks=1 00:15:11.203 00:15:11.203 ' 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:11.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.203 --rc genhtml_branch_coverage=1 00:15:11.203 --rc genhtml_function_coverage=1 00:15:11.203 --rc genhtml_legend=1 00:15:11.203 --rc geninfo_all_blocks=1 00:15:11.203 --rc geninfo_unexecuted_blocks=1 00:15:11.203 00:15:11.203 ' 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:11.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.203 --rc genhtml_branch_coverage=1 00:15:11.203 --rc genhtml_function_coverage=1 00:15:11.203 --rc genhtml_legend=1 00:15:11.203 --rc geninfo_all_blocks=1 00:15:11.203 --rc geninfo_unexecuted_blocks=1 00:15:11.203 00:15:11.203 ' 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:11.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.203 --rc genhtml_branch_coverage=1 00:15:11.203 --rc genhtml_function_coverage=1 00:15:11.203 --rc genhtml_legend=1 00:15:11.203 --rc geninfo_all_blocks=1 00:15:11.203 --rc geninfo_unexecuted_blocks=1 00:15:11.203 00:15:11.203 ' 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.203 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:11.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
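The "[: : integer expression expected" message above is bash complaining rather than the test failing: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because the variable it tests is empty, and -eq requires integer operands. The harness tolerates the warning, but the usual guard is to default the variable before the numeric test. A minimal sketch, with SPDK_TEST_FOO standing in for whichever unset knob reaches that line (the real variable name is not visible in this trace):

# hypothetical guard; ':-0' substitutes 0 when the variable is empty or unset,
# so the '-eq' comparison always sees an integer and never warns
if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
    NVMF_APP+=("--some-flag")   # placeholder for whatever arg line 33 would append
fi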
00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2649931 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2649931' 00:15:11.204 Process pid: 2649931 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2649931 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2649931 ']' 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:11.204 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:11.204 [2024-10-01 16:40:02.740744] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:15:11.204 [2024-10-01 16:40:02.740799] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.204 [2024-10-01 16:40:02.817641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:11.204 [2024-10-01 16:40:02.881844] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.204 [2024-10-01 16:40:02.881882] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
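Condensing the launch sequence traced above: the script starts nvmf_tgt with shared-memory id 0 (-i 0), all tracepoint groups enabled (-e 0xFFFF), and a four-core mask, records the pid, installs a kill-on-exit trap, then blocks in waitforlisten until the target answers on /var/tmp/spdk.sock. A sketch of that pattern, with waitforlisten approximated by polling a cheap RPC (the harness helper is more elaborate, and killprocess is a harness function):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk       # workspace path from the trace
"$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!
trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT      # same trap as target/nvmf_vfio_user.sh@59
# stand-in for waitforlisten: retry until the RPC socket accepts requests
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done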
00:15:11.204 [2024-10-01 16:40:02.881890] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.204 [2024-10-01 16:40:02.881896] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.204 [2024-10-01 16:40:02.881903] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.204 [2024-10-01 16:40:02.882029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.204 [2024-10-01 16:40:02.882173] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:11.204 [2024-10-01 16:40:02.882175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.204 [2024-10-01 16:40:02.882049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.464 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:11.464 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:11.464 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:12.408 16:40:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:12.668 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:12.668 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:12.668 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:12.668 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:12.668 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:12.929 Malloc1 00:15:12.929 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:12.929 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:13.189 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:13.450 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:13.450 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:13.450 16:40:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:13.450 Malloc2 00:15:13.450 16:40:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
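The per-device provisioning traced above is one five-step recipe run twice (NUM_DEVICES=2, MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 per the script header). Rolled back into its loop form it is, roughly:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc_py" nvmf_create_transport -t VFIOUSER
for i in $(seq 1 2); do
    mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
    "$rpc_py" bdev_malloc_create 64 512 -b "Malloc$i"                    # 64 MiB bdev, 512 B blocks
    "$rpc_py" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    "$rpc_py" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    "$rpc_py" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
done

Each listener directory becomes the "device" a vfio-user initiator attaches to, which is why the identify run below addresses the controller by path instead of a PCI BDF.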
00:15:13.711 16:40:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:13.971 16:40:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:14.234 16:40:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:14.234 16:40:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:14.234 16:40:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:14.234 16:40:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:14.234 16:40:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:14.234 16:40:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:14.234 [2024-10-01 16:40:05.780190] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:15:14.234 [2024-10-01 16:40:05.780232] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2650506 ] 00:15:14.234 [2024-10-01 16:40:05.812757] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:14.234 [2024-10-01 16:40:05.815038] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:14.234 [2024-10-01 16:40:05.815059] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f92339d7000 00:15:14.234 [2024-10-01 16:40:05.816035] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.234 [2024-10-01 16:40:05.817037] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.234 [2024-10-01 16:40:05.818046] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.234 [2024-10-01 16:40:05.819053] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:14.234 [2024-10-01 16:40:05.820056] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:14.235 [2024-10-01 16:40:05.821062] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.235 [2024-10-01 16:40:05.822072] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:15:14.235 [2024-10-01 16:40:05.823080] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.235 [2024-10-01 16:40:05.824077] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:14.235 [2024-10-01 16:40:05.824087] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f92339cc000 00:15:14.235 [2024-10-01 16:40:05.825315] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:14.235 [2024-10-01 16:40:05.844442] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:14.235 [2024-10-01 16:40:05.844465] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:14.235 [2024-10-01 16:40:05.847205] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:14.235 [2024-10-01 16:40:05.847251] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:14.235 [2024-10-01 16:40:05.847333] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:14.235 [2024-10-01 16:40:05.847352] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:14.235 [2024-10-01 16:40:05.847358] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:14.235 [2024-10-01 16:40:05.848212] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:14.235 [2024-10-01 16:40:05.848221] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:14.235 [2024-10-01 16:40:05.848232] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:14.235 [2024-10-01 16:40:05.849219] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:14.235 [2024-10-01 16:40:05.849228] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:14.235 [2024-10-01 16:40:05.849235] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:14.235 [2024-10-01 16:40:05.850220] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:14.235 [2024-10-01 16:40:05.850228] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:14.235 [2024-10-01 16:40:05.851226] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:14.235 [2024-10-01 
16:40:05.851234] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:14.235 [2024-10-01 16:40:05.851239] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:14.235 [2024-10-01 16:40:05.851245] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:14.235 [2024-10-01 16:40:05.851351] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:14.235 [2024-10-01 16:40:05.851355] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:14.235 [2024-10-01 16:40:05.851360] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:14.235 [2024-10-01 16:40:05.852240] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:14.235 [2024-10-01 16:40:05.853245] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:14.235 [2024-10-01 16:40:05.854252] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:14.235 [2024-10-01 16:40:05.855252] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:14.235 [2024-10-01 16:40:05.855318] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:14.235 [2024-10-01 16:40:05.856260] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:14.235 [2024-10-01 16:40:05.856268] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:14.235 [2024-10-01 16:40:05.856273] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:14.235 [2024-10-01 16:40:05.856293] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:14.235 [2024-10-01 16:40:05.856301] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:14.235 [2024-10-01 16:40:05.856315] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:14.235 [2024-10-01 16:40:05.856322] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.235 [2024-10-01 16:40:05.856326] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.235 [2024-10-01 16:40:05.856341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.235 [2024-10-01 16:40:05.856382] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:14.235 [2024-10-01 16:40:05.856391] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:14.235 [2024-10-01 16:40:05.856396] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:14.235 [2024-10-01 16:40:05.856400] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:14.235 [2024-10-01 16:40:05.856404] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:14.235 [2024-10-01 16:40:05.856410] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:14.235 [2024-10-01 16:40:05.856416] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:14.235 [2024-10-01 16:40:05.856421] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:14.235 [2024-10-01 16:40:05.856429] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:14.235 [2024-10-01 16:40:05.856438] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:14.235 [2024-10-01 16:40:05.856449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:14.235 [2024-10-01 16:40:05.856461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.235 [2024-10-01 16:40:05.856470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.235 [2024-10-01 16:40:05.856478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.235 [2024-10-01 16:40:05.856485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.235 [2024-10-01 16:40:05.856490] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:14.235 [2024-10-01 16:40:05.856499] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:14.235 [2024-10-01 16:40:05.856509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:14.235 [2024-10-01 16:40:05.856517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:14.235 [2024-10-01 16:40:05.856526] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:14.235 [2024-10-01 16:40:05.856531] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:14.235 [2024-10-01 16:40:05.856538] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:14.235 [2024-10-01 16:40:05.856545] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:14.235 [2024-10-01 16:40:05.856556] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:14.235 [2024-10-01 16:40:05.856567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:14.235 [2024-10-01 16:40:05.856624] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:14.235 [2024-10-01 16:40:05.856632] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:14.235 [2024-10-01 16:40:05.856639] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:14.235 [2024-10-01 16:40:05.856644] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:14.235 [2024-10-01 16:40:05.856647] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.235 [2024-10-01 16:40:05.856653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:14.235 [2024-10-01 16:40:05.856668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:14.235 [2024-10-01 16:40:05.856677] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:14.235 [2024-10-01 16:40:05.856685] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:14.235 [2024-10-01 16:40:05.856693] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:14.235 [2024-10-01 16:40:05.856699] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:14.235 [2024-10-01 16:40:05.856704] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.235 [2024-10-01 16:40:05.856707] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.236 [2024-10-01 16:40:05.856715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.236 [2024-10-01 16:40:05.856732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:14.236 [2024-10-01 16:40:05.856744] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:14.236 [2024-10-01 16:40:05.856752] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:14.236 [2024-10-01 16:40:05.856758] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:14.236 [2024-10-01 16:40:05.856762] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.236 [2024-10-01 16:40:05.856766] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.236 [2024-10-01 16:40:05.856772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.236 [2024-10-01 16:40:05.856781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:14.236 [2024-10-01 16:40:05.856789] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:14.236 [2024-10-01 16:40:05.856795] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:14.236 [2024-10-01 16:40:05.856806] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:14.236 [2024-10-01 16:40:05.856812] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:14.236 [2024-10-01 16:40:05.856816] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:14.236 [2024-10-01 16:40:05.856822] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:14.236 [2024-10-01 16:40:05.856827] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:14.236 [2024-10-01 16:40:05.856831] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:14.236 [2024-10-01 16:40:05.856836] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:14.236 [2024-10-01 16:40:05.856854] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:14.236 [2024-10-01 16:40:05.856865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:14.236 [2024-10-01 16:40:05.856875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:14.236 [2024-10-01 16:40:05.856882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:14.236 [2024-10-01 16:40:05.856893] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:14.236 [2024-10-01 16:40:05.856904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:14.236 [2024-10-01 16:40:05.856916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:14.236 [2024-10-01 16:40:05.856926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:14.236 [2024-10-01 16:40:05.856939] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:14.236 [2024-10-01 16:40:05.856943] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:14.236 [2024-10-01 16:40:05.856947] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:14.236 [2024-10-01 16:40:05.856950] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:14.236 [2024-10-01 16:40:05.856954] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:14.236 [2024-10-01 16:40:05.856960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:14.236 [2024-10-01 16:40:05.856967] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:14.236 [2024-10-01 16:40:05.856978] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:14.236 [2024-10-01 16:40:05.856981] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.236 [2024-10-01 16:40:05.856987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:14.236 [2024-10-01 16:40:05.856994] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:14.236 [2024-10-01 16:40:05.856998] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.236 [2024-10-01 16:40:05.857003] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.236 [2024-10-01 16:40:05.857009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.236 [2024-10-01 16:40:05.857016] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:14.236 [2024-10-01 16:40:05.857020] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:14.236 [2024-10-01 16:40:05.857024] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:14.236 [2024-10-01 16:40:05.857029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:14.236 [2024-10-01 16:40:05.857036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:14.236 [2024-10-01 16:40:05.857047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:14.236 [2024-10-01 16:40:05.857057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:14.236 [2024-10-01 16:40:05.857064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:14.236 ===================================================== 00:15:14.236 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:14.236 ===================================================== 00:15:14.236 Controller Capabilities/Features 00:15:14.236 ================================ 00:15:14.236 Vendor ID: 4e58 00:15:14.236 Subsystem Vendor ID: 4e58 00:15:14.236 Serial Number: SPDK1 00:15:14.236 Model Number: SPDK bdev Controller 00:15:14.236 Firmware Version: 25.01 00:15:14.236 Recommended Arb Burst: 6 00:15:14.236 IEEE OUI Identifier: 8d 6b 50 00:15:14.236 Multi-path I/O 00:15:14.236 May have multiple subsystem ports: Yes 00:15:14.236 May have multiple controllers: Yes 00:15:14.236 Associated with SR-IOV VF: No 00:15:14.236 Max Data Transfer Size: 131072 00:15:14.236 Max Number of Namespaces: 32 00:15:14.236 Max Number of I/O Queues: 127 00:15:14.236 NVMe Specification Version (VS): 1.3 00:15:14.236 NVMe Specification Version (Identify): 1.3 00:15:14.236 Maximum Queue Entries: 256 00:15:14.236 Contiguous Queues Required: Yes 00:15:14.236 Arbitration Mechanisms Supported 00:15:14.236 Weighted Round Robin: Not Supported 00:15:14.236 Vendor Specific: Not Supported 00:15:14.236 Reset Timeout: 15000 ms 00:15:14.236 Doorbell Stride: 4 bytes 00:15:14.236 NVM Subsystem Reset: Not Supported 00:15:14.236 Command Sets Supported 00:15:14.236 NVM Command Set: Supported 00:15:14.236 Boot Partition: Not Supported 00:15:14.236 Memory Page Size Minimum: 4096 bytes 00:15:14.236 Memory Page Size Maximum: 4096 bytes 00:15:14.236 Persistent Memory Region: Not Supported 00:15:14.236 Optional Asynchronous Events Supported 00:15:14.236 Namespace Attribute Notices: Supported 00:15:14.236 Firmware Activation Notices: Not Supported 00:15:14.236 ANA Change Notices: Not Supported 00:15:14.236 PLE Aggregate Log Change Notices: Not Supported 00:15:14.236 LBA Status Info Alert Notices: Not Supported 00:15:14.236 EGE Aggregate Log Change Notices: Not Supported 00:15:14.236 Normal NVM Subsystem Shutdown event: Not Supported 00:15:14.236 Zone Descriptor Change Notices: Not Supported 00:15:14.236 Discovery Log Change Notices: Not Supported 00:15:14.236 Controller Attributes 00:15:14.236 128-bit Host Identifier: Supported 00:15:14.236 Non-Operational Permissive Mode: Not Supported 00:15:14.236 NVM Sets: Not Supported 00:15:14.236 Read Recovery Levels: Not Supported 00:15:14.236 Endurance Groups: Not Supported 00:15:14.236 Predictable Latency Mode: Not Supported 00:15:14.236 Traffic Based Keep ALive: Not Supported 00:15:14.236 Namespace Granularity: Not Supported 00:15:14.236 SQ Associations: Not Supported 00:15:14.236 UUID List: Not Supported 00:15:14.236 Multi-Domain Subsystem: Not Supported 00:15:14.236 Fixed Capacity Management: Not Supported 00:15:14.236 Variable Capacity Management: Not Supported 00:15:14.236 Delete Endurance Group: Not Supported 00:15:14.236 Delete NVM Set: Not Supported 00:15:14.237 Extended LBA Formats Supported: Not Supported 00:15:14.237 Flexible Data Placement Supported: Not Supported 00:15:14.237 00:15:14.237 Controller Memory Buffer Support 00:15:14.237 ================================ 00:15:14.237 Supported: No 00:15:14.237 00:15:14.237 Persistent Memory Region Support 00:15:14.237 
================================ 00:15:14.237 Supported: No 00:15:14.237 00:15:14.237 Admin Command Set Attributes 00:15:14.237 ============================ 00:15:14.237 Security Send/Receive: Not Supported 00:15:14.237 Format NVM: Not Supported 00:15:14.237 Firmware Activate/Download: Not Supported 00:15:14.237 Namespace Management: Not Supported 00:15:14.237 Device Self-Test: Not Supported 00:15:14.237 Directives: Not Supported 00:15:14.237 NVMe-MI: Not Supported 00:15:14.237 Virtualization Management: Not Supported 00:15:14.237 Doorbell Buffer Config: Not Supported 00:15:14.237 Get LBA Status Capability: Not Supported 00:15:14.237 Command & Feature Lockdown Capability: Not Supported 00:15:14.237 Abort Command Limit: 4 00:15:14.237 Async Event Request Limit: 4 00:15:14.237 Number of Firmware Slots: N/A 00:15:14.237 Firmware Slot 1 Read-Only: N/A 00:15:14.237 Firmware Activation Without Reset: N/A 00:15:14.237 Multiple Update Detection Support: N/A 00:15:14.237 Firmware Update Granularity: No Information Provided 00:15:14.237 Per-Namespace SMART Log: No 00:15:14.237 Asymmetric Namespace Access Log Page: Not Supported 00:15:14.237 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:14.237 Command Effects Log Page: Supported 00:15:14.237 Get Log Page Extended Data: Supported 00:15:14.237 Telemetry Log Pages: Not Supported 00:15:14.237 Persistent Event Log Pages: Not Supported 00:15:14.237 Supported Log Pages Log Page: May Support 00:15:14.237 Commands Supported & Effects Log Page: Not Supported 00:15:14.237 Feature Identifiers & Effects Log Page:May Support 00:15:14.237 NVMe-MI Commands & Effects Log Page: May Support 00:15:14.237 Data Area 4 for Telemetry Log: Not Supported 00:15:14.237 Error Log Page Entries Supported: 128 00:15:14.237 Keep Alive: Supported 00:15:14.237 Keep Alive Granularity: 10000 ms 00:15:14.237 00:15:14.237 NVM Command Set Attributes 00:15:14.237 ========================== 00:15:14.237 Submission Queue Entry Size 00:15:14.237 Max: 64 00:15:14.237 Min: 64 00:15:14.237 Completion Queue Entry Size 00:15:14.237 Max: 16 00:15:14.237 Min: 16 00:15:14.237 Number of Namespaces: 32 00:15:14.237 Compare Command: Supported 00:15:14.237 Write Uncorrectable Command: Not Supported 00:15:14.237 Dataset Management Command: Supported 00:15:14.237 Write Zeroes Command: Supported 00:15:14.237 Set Features Save Field: Not Supported 00:15:14.237 Reservations: Not Supported 00:15:14.237 Timestamp: Not Supported 00:15:14.237 Copy: Supported 00:15:14.237 Volatile Write Cache: Present 00:15:14.237 Atomic Write Unit (Normal): 1 00:15:14.237 Atomic Write Unit (PFail): 1 00:15:14.237 Atomic Compare & Write Unit: 1 00:15:14.237 Fused Compare & Write: Supported 00:15:14.237 Scatter-Gather List 00:15:14.237 SGL Command Set: Supported (Dword aligned) 00:15:14.237 SGL Keyed: Not Supported 00:15:14.237 SGL Bit Bucket Descriptor: Not Supported 00:15:14.237 SGL Metadata Pointer: Not Supported 00:15:14.237 Oversized SGL: Not Supported 00:15:14.237 SGL Metadata Address: Not Supported 00:15:14.237 SGL Offset: Not Supported 00:15:14.237 Transport SGL Data Block: Not Supported 00:15:14.237 Replay Protected Memory Block: Not Supported 00:15:14.237 00:15:14.237 Firmware Slot Information 00:15:14.237 ========================= 00:15:14.237 Active slot: 1 00:15:14.237 Slot 1 Firmware Revision: 25.01 00:15:14.237 00:15:14.237 00:15:14.237 Commands Supported and Effects 00:15:14.237 ============================== 00:15:14.237 Admin Commands 00:15:14.237 -------------- 00:15:14.237 Get Log Page (02h): Supported 
00:15:14.237 Identify (06h): Supported 00:15:14.237 Abort (08h): Supported 00:15:14.237 Set Features (09h): Supported 00:15:14.237 Get Features (0Ah): Supported 00:15:14.237 Asynchronous Event Request (0Ch): Supported 00:15:14.237 Keep Alive (18h): Supported 00:15:14.237 I/O Commands 00:15:14.237 ------------ 00:15:14.237 Flush (00h): Supported LBA-Change 00:15:14.237 Write (01h): Supported LBA-Change 00:15:14.237 Read (02h): Supported 00:15:14.237 Compare (05h): Supported 00:15:14.237 Write Zeroes (08h): Supported LBA-Change 00:15:14.237 Dataset Management (09h): Supported LBA-Change 00:15:14.237 Copy (19h): Supported LBA-Change 00:15:14.237 00:15:14.237 Error Log 00:15:14.237 ========= 00:15:14.237 00:15:14.237 Arbitration 00:15:14.237 =========== 00:15:14.237 Arbitration Burst: 1 00:15:14.237 00:15:14.237 Power Management 00:15:14.237 ================ 00:15:14.237 Number of Power States: 1 00:15:14.237 Current Power State: Power State #0 00:15:14.237 Power State #0: 00:15:14.237 Max Power: 0.00 W 00:15:14.237 Non-Operational State: Operational 00:15:14.237 Entry Latency: Not Reported 00:15:14.237 Exit Latency: Not Reported 00:15:14.237 Relative Read Throughput: 0 00:15:14.237 Relative Read Latency: 0 00:15:14.237 Relative Write Throughput: 0 00:15:14.237 Relative Write Latency: 0 00:15:14.237 Idle Power: Not Reported 00:15:14.237 Active Power: Not Reported 00:15:14.237 Non-Operational Permissive Mode: Not Supported 00:15:14.237 00:15:14.237 Health Information 00:15:14.237 ================== 00:15:14.237 Critical Warnings: 00:15:14.237 Available Spare Space: OK 00:15:14.237 Temperature: OK 00:15:14.237 Device Reliability: OK 00:15:14.237 Read Only: No 00:15:14.237 Volatile Memory Backup: OK 00:15:14.237 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:14.237 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:14.237 Available Spare: 0% 00:15:14.237 Available Spare Threshold: 0% [2024-10-01 16:40:05.857154] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:14.237 [2024-10-01 16:40:05.857165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:14.237 [2024-10-01 16:40:05.857191] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:14.237 [2024-10-01 16:40:05.857201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.237 [2024-10-01 16:40:05.857207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.237 [2024-10-01 16:40:05.857213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.237 [2024-10-01 16:40:05.857219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.237 [2024-10-01 16:40:05.859978] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:14.237 [2024-10-01 16:40:05.859989] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:14.237 [2024-10-01 16:40:05.860283] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:15:14.237 [2024-10-01 16:40:05.860323] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:14.237 [2024-10-01 16:40:05.860329] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:14.237 [2024-10-01 16:40:05.861292] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:14.237 [2024-10-01 16:40:05.861303] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:14.237 [2024-10-01 16:40:05.861358] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:14.237 [2024-10-01 16:40:05.863318] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:14.237 Life Percentage Used: 0% 00:15:14.237 Data Units Read: 0 00:15:14.237 Data Units Written: 0 00:15:14.237 Host Read Commands: 0 00:15:14.237 Host Write Commands: 0 00:15:14.237 Controller Busy Time: 0 minutes 00:15:14.237 Power Cycles: 0 00:15:14.237 Power On Hours: 0 hours 00:15:14.237 Unsafe Shutdowns: 0 00:15:14.237 Unrecoverable Media Errors: 0 00:15:14.237 Lifetime Error Log Entries: 0 00:15:14.237 Warning Temperature Time: 0 minutes 00:15:14.237 Critical Temperature Time: 0 minutes 00:15:14.237 00:15:14.237 Number of Queues 00:15:14.237 ================ 00:15:14.237 Number of I/O Submission Queues: 127 00:15:14.237 Number of I/O Completion Queues: 127 00:15:14.237 00:15:14.237 Active Namespaces 00:15:14.237 ================= 00:15:14.237 Namespace ID:1 00:15:14.237 Error Recovery Timeout: Unlimited 00:15:14.237 Command Set Identifier: NVM (00h) 00:15:14.237 Deallocate: Supported 00:15:14.237 Deallocated/Unwritten Error: Not Supported 00:15:14.237 Deallocated Read Value: Unknown 00:15:14.237 Deallocate in Write Zeroes: Not Supported 00:15:14.237 Deallocated Guard Field: 0xFFFF 00:15:14.237 Flush: Supported 00:15:14.237 Reservation: Supported 00:15:14.238 Namespace Sharing Capabilities: Multiple Controllers 00:15:14.238 Size (in LBAs): 131072 (0GiB) 00:15:14.238 Capacity (in LBAs): 131072 (0GiB) 00:15:14.238 Utilization (in LBAs): 131072 (0GiB) 00:15:14.238 NGUID: BF444AEB60DC40AEAF0E4A13DC202B5B 00:15:14.238 UUID: bf444aeb-60dc-40ae-af0e-4a13dc202b5b 00:15:14.238 Thin Provisioning: Not Supported 00:15:14.238 Per-NS Atomic Units: Yes 00:15:14.238 Atomic Boundary Size (Normal): 0 00:15:14.238 Atomic Boundary Size (PFail): 0 00:15:14.238 Atomic Boundary Offset: 0 00:15:14.238 Maximum Single Source Range Length: 65535 00:15:14.238 Maximum Copy Length: 65535 00:15:14.238 Maximum Source Range Count: 1 00:15:14.238 NGUID/EUI64 Never Reused: No 00:15:14.238 Namespace Write Protected: No 00:15:14.238 Number of LBA Formats: 1 00:15:14.238 Current LBA Format: LBA Format #00 00:15:14.238 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:14.238 00:15:14.238 16:40:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:14.498 [2024-10-01 16:40:06.048596] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:19.780 Initializing NVMe Controllers 00:15:19.780 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:19.781 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:19.781 Initialization complete. Launching workers. 00:15:19.781 ======================================================== 00:15:19.781 Latency(us) 00:15:19.781 Device Information : IOPS MiB/s Average min max 00:15:19.781 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40191.24 157.00 3184.44 876.54 10741.38 00:15:19.781 ======================================================== 00:15:19.781 Total : 40191.24 157.00 3184.44 876.54 10741.38 00:15:19.781 00:15:19.781 [2024-10-01 16:40:11.067087] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:19.781 16:40:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:19.781 [2024-10-01 16:40:11.247981] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:25.063 Initializing NVMe Controllers 00:15:25.063 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:25.063 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:25.063 Initialization complete. Launching workers. 00:15:25.063 ======================================================== 00:15:25.063 Latency(us) 00:15:25.063 Device Information : IOPS MiB/s Average min max 00:15:25.063 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.64 6975.28 8982.95 00:15:25.063 ======================================================== 00:15:25.063 Total : 16051.20 62.70 7980.64 6975.28 8982.95 00:15:25.063 00:15:25.063 [2024-10-01 16:40:16.281907] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:25.063 16:40:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:25.063 [2024-10-01 16:40:16.475761] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:30.342 [2024-10-01 16:40:21.544176] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:30.342 Initializing NVMe Controllers 00:15:30.342 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:30.342 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:30.342 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:30.342 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:30.342 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:30.342 Initialization complete. Launching workers. 
00:15:30.342 Starting thread on core 2 00:15:30.342 Starting thread on core 3 00:15:30.342 Starting thread on core 1 00:15:30.342 16:40:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:30.342 [2024-10-01 16:40:21.797369] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:33.637 [2024-10-01 16:40:24.847598] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:33.637 Initializing NVMe Controllers 00:15:33.637 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:33.637 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:33.637 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:33.637 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:33.637 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:33.637 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:33.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:33.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:33.637 Initialization complete. Launching workers. 00:15:33.637 Starting thread on core 1 with urgent priority queue 00:15:33.637 Starting thread on core 2 with urgent priority queue 00:15:33.637 Starting thread on core 3 with urgent priority queue 00:15:33.637 Starting thread on core 0 with urgent priority queue 00:15:33.637 SPDK bdev Controller (SPDK1 ) core 0: 11118.00 IO/s 8.99 secs/100000 ios 00:15:33.637 SPDK bdev Controller (SPDK1 ) core 1: 13349.33 IO/s 7.49 secs/100000 ios 00:15:33.637 SPDK bdev Controller (SPDK1 ) core 2: 13468.00 IO/s 7.43 secs/100000 ios 00:15:33.637 SPDK bdev Controller (SPDK1 ) core 3: 14696.00 IO/s 6.80 secs/100000 ios 00:15:33.637 ======================================================== 00:15:33.637 00:15:33.637 16:40:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:33.637 [2024-10-01 16:40:25.104407] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:33.637 Initializing NVMe Controllers 00:15:33.637 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:33.637 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:33.637 Namespace ID: 1 size: 0GB 00:15:33.637 Initialization complete. 00:15:33.637 INFO: using host memory buffer for IO 00:15:33.637 Hello world! 
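One detail worth pulling out of these runs: identify, perf, reconnect, arbitration, and hello_world all reach the target through the same -r transport-ID string, substituting the listener's socket directory for the usual PCI address. The read-perf table above also self-checks: 40191.24 IOPS at 4096-byte blocks is 40191.24 x 4096 / 2^20, which is about 157.0 MiB/s, the MiB/s column printed. A sketch of the shared invocation pattern (bin path shortened, flags exactly as in the runs above):

bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
trid='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
"$bin/spdk_nvme_identify" -r "$trid" -g                                  # dumps the controller report seen above
"$bin/spdk_nvme_perf" -r "$trid" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2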
00:15:33.637 [2024-10-01 16:40:25.138767] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:33.637 16:40:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:33.897 [2024-10-01 16:40:25.396382] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:34.838 Initializing NVMe Controllers 00:15:34.838 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:34.838 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:34.838 Initialization complete. Launching workers. 00:15:34.838 submit (in ns) avg, min, max = 8792.8, 3659.2, 6992548.5 00:15:34.838 complete (in ns) avg, min, max = 17434.7, 2179.2, 6992382.3 00:15:34.838 00:15:34.838 Submit histogram 00:15:34.838 ================ 00:15:34.838 Range in us Cumulative Count 00:15:34.838 3.643 - 3.668: 0.5948% ( 112) 00:15:34.838 3.668 - 3.692: 4.0731% ( 655) 00:15:34.838 3.692 - 3.717: 12.2511% ( 1540) 00:15:34.838 3.717 - 3.742: 21.5443% ( 1750) 00:15:34.838 3.742 - 3.766: 34.3423% ( 2410) 00:15:34.838 3.766 - 3.791: 44.7878% ( 1967) 00:15:34.838 3.791 - 3.815: 58.4940% ( 2581) 00:15:34.838 3.815 - 3.840: 76.8786% ( 3462) 00:15:34.838 3.840 - 3.865: 89.7350% ( 2421) 00:15:34.838 3.865 - 3.889: 96.5270% ( 1279) 00:15:34.838 3.889 - 3.914: 98.8370% ( 435) 00:15:34.838 3.914 - 3.938: 99.3043% ( 88) 00:15:34.838 3.938 - 3.963: 99.4530% ( 28) 00:15:34.838 3.963 - 3.988: 99.5008% ( 9) 00:15:34.838 3.988 - 4.012: 99.5274% ( 5) 00:15:34.838 4.012 - 4.037: 99.5327% ( 1) 00:15:34.838 4.037 - 4.062: 99.5380% ( 1) 00:15:34.838 4.111 - 4.135: 99.5433% ( 1) 00:15:34.838 4.258 - 4.283: 99.5486% ( 1) 00:15:34.838 5.046 - 5.071: 99.5539% ( 1) 00:15:34.838 5.194 - 5.218: 99.5592% ( 1) 00:15:34.838 5.711 - 5.735: 99.5645% ( 1) 00:15:34.838 5.883 - 5.908: 99.5699% ( 1) 00:15:34.838 5.982 - 6.006: 99.5752% ( 1) 00:15:34.838 6.006 - 6.031: 99.5805% ( 1) 00:15:34.838 6.203 - 6.228: 99.5858% ( 1) 00:15:34.838 6.597 - 6.646: 99.5911% ( 1) 00:15:34.838 6.745 - 6.794: 99.5964% ( 1) 00:15:34.838 6.942 - 6.991: 99.6070% ( 2) 00:15:34.838 6.991 - 7.040: 99.6123% ( 1) 00:15:34.838 7.089 - 7.138: 99.6177% ( 1) 00:15:34.838 7.138 - 7.188: 99.6283% ( 2) 00:15:34.838 7.188 - 7.237: 99.6336% ( 1) 00:15:34.838 7.335 - 7.385: 99.6442% ( 2) 00:15:34.838 7.385 - 7.434: 99.6495% ( 1) 00:15:34.838 7.434 - 7.483: 99.6548% ( 1) 00:15:34.838 7.483 - 7.532: 99.6601% ( 1) 00:15:34.838 7.532 - 7.582: 99.6654% ( 1) 00:15:34.838 7.582 - 7.631: 99.6708% ( 1) 00:15:34.838 7.631 - 7.680: 99.6814% ( 2) 00:15:34.838 7.680 - 7.729: 99.6920% ( 2) 00:15:34.838 7.778 - 7.828: 99.6973% ( 1) 00:15:34.838 7.828 - 7.877: 99.7079% ( 2) 00:15:34.838 7.926 - 7.975: 99.7132% ( 1) 00:15:34.838 7.975 - 8.025: 99.7239% ( 2) 00:15:34.838 8.123 - 8.172: 99.7292% ( 1) 00:15:34.838 8.172 - 8.222: 99.7398% ( 2) 00:15:34.838 8.222 - 8.271: 99.7504% ( 2) 00:15:34.838 8.271 - 8.320: 99.7717% ( 4) 00:15:34.838 8.320 - 8.369: 99.7823% ( 2) 00:15:34.838 8.418 - 8.468: 99.7982% ( 3) 00:15:34.838 8.517 - 8.566: 99.8035% ( 1) 00:15:34.838 8.615 - 8.665: 99.8088% ( 1) 00:15:34.838 8.665 - 8.714: 99.8141% ( 1) 00:15:34.838 8.714 - 8.763: 99.8194% ( 1) 00:15:34.838 8.763 - 8.812: 99.8248% ( 1) 00:15:34.838 8.862 - 8.911: 99.8354% ( 2) 00:15:34.838 9.009 - 9.058: 99.8460% ( 2) 
00:15:34.838 9.108 - 9.157: 99.8513% ( 1) 00:15:34.838 9.157 - 9.206: 99.8566% ( 1) 00:15:34.838 9.206 - 9.255: 99.8619% ( 1) 00:15:34.838 9.354 - 9.403: 99.8672% ( 1) 00:15:34.838 9.945 - 9.994: 99.8726% ( 1) 00:15:34.838 12.308 - 12.357: 99.8779% ( 1) 00:15:34.838 13.095 - 13.194: 99.8832% ( 1) 00:15:34.838 3982.572 - 4007.778: 99.9894% ( 20) 00:15:34.838 6956.898 - 7007.311: 100.0000% ( 2) 00:15:34.838 00:15:34.838 Complete histogram 00:15:34.838 ================== 00:15:34.838 Range in us Cumulative Count 00:15:34.839 2.178 - 2.191: 0.0106% ( 2) 00:15:34.839 2.191 - 2.203: 0.0319% ( 4) 00:15:34.839 2.203 - 2.215: 0.9081% ( 165) 00:15:34.839 2.215 - 2.228: 0.9877% ( 15) 00:15:34.839 2.228 - 2.240: 1.0939% ( 20) 00:15:34.839 2.240 - 2.252: 1.1364% ( 8) 00:15:34.839 2.252 - 2.265: 19.3511% ( 3430) 00:15:34.839 2.265 - 2.277: 54.7236% ( 6661) 00:15:34.839 2.277 - 2.289: 61.7917% ( 1331) 00:15:34.839 2.289 - 2.302: 74.0109% ( 2301) 00:15:34.839 [2024-10-01 16:40:26.415931] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:34.839 2.302 - 2.314: 80.4896% ( 1220) 00:15:34.839 2.314 - 2.326: 81.5305% ( 196) 00:15:34.839 2.326 - 2.338: 85.2159% ( 694) 00:15:34.839 2.338 - 2.351: 91.5140% ( 1186) 00:15:34.839 2.351 - 2.363: 95.4809% ( 747) 00:15:34.839 2.363 - 2.375: 97.7218% ( 422) 00:15:34.839 2.375 - 2.388: 98.7999% ( 203) 00:15:34.839 2.388 - 2.400: 99.2034% ( 76) 00:15:34.839 2.400 - 2.412: 99.2937% ( 17) 00:15:34.839 2.412 - 2.425: 99.3468% ( 10) 00:15:34.839 2.449 - 2.462: 99.3521% ( 1) 00:15:34.839 4.086 - 4.111: 99.3574% ( 1) 00:15:34.839 4.209 - 4.234: 99.3628% ( 1) 00:15:34.839 4.431 - 4.455: 99.3681% ( 1) 00:15:34.839 4.751 - 4.775: 99.3734% ( 1) 00:15:34.839 5.514 - 5.538: 99.3787% ( 1) 00:15:34.839 5.637 - 5.662: 99.3840% ( 1) 00:15:34.839 5.711 - 5.735: 99.3893% ( 1) 00:15:34.839 5.760 - 5.785: 99.3946% ( 1) 00:15:34.839 5.785 - 5.809: 99.3999% ( 1) 00:15:34.839 5.883 - 5.908: 99.4052% ( 1) 00:15:34.839 5.957 - 5.982: 99.4105% ( 1) 00:15:34.839 6.129 - 6.154: 99.4159% ( 1) 00:15:34.839 6.203 - 6.228: 99.4212% ( 1) 00:15:34.839 6.252 - 6.277: 99.4265% ( 1) 00:15:34.839 6.302 - 6.351: 99.4371% ( 2) 00:15:34.839 6.351 - 6.400: 99.4424% ( 1) 00:15:34.839 6.400 - 6.449: 99.4477% ( 1) 00:15:34.839 6.449 - 6.498: 99.4530% ( 1) 00:15:34.839 6.498 - 6.548: 99.4583% ( 1) 00:15:34.839 6.597 - 6.646: 99.4637% ( 1) 00:15:34.839 6.695 - 6.745: 99.4743% ( 2) 00:15:34.839 6.745 - 6.794: 99.4849% ( 2) 00:15:34.839 6.794 - 6.843: 99.4902% ( 1) 00:15:34.839 6.991 - 7.040: 99.4955% ( 1) 00:15:34.839 7.040 - 7.089: 99.5008% ( 1) 00:15:34.839 7.089 - 7.138: 99.5061% ( 1) 00:15:34.839 7.138 - 7.188: 99.5114% ( 1) 00:15:34.839 7.188 - 7.237: 99.5221% ( 2) 00:15:34.839 7.237 - 7.286: 99.5274% ( 1) 00:15:34.839 7.335 - 7.385: 99.5327% ( 1) 00:15:34.839 7.434 - 7.483: 99.5433% ( 2) 00:15:34.839 7.483 - 7.532: 99.5486% ( 1) 00:15:34.839 7.582 - 7.631: 99.5539% ( 1) 00:15:34.839 7.729 - 7.778: 99.5592% ( 1) 00:15:34.839 7.778 - 7.828: 99.5645% ( 1) 00:15:34.839 7.877 - 7.926: 99.5699% ( 1) 00:15:34.839 8.074 - 8.123: 99.5752% ( 1) 00:15:34.839 8.418 - 8.468: 99.5805% ( 1) 00:15:34.839 8.615 - 8.665: 99.5858% ( 1) 00:15:34.839 8.763 - 8.812: 99.5911% ( 1) 00:15:34.839 9.058 - 9.108: 99.5964% ( 1) 00:15:34.839 14.080 - 14.178: 99.6017% ( 1) 00:15:34.839 16.640 - 16.738: 99.6070% ( 1) 00:15:34.839 40.960 - 41.157: 99.6123% ( 1) 00:15:34.839 146.511 - 147.298: 99.6177% ( 1) 00:15:34.839 1033.452 - 1039.754: 99.6230% ( 1) 00:15:34.839 1083.865 - 1090.166:
99.6283% ( 1) 00:15:34.839 3982.572 - 4007.778: 99.9947% ( 69) 00:15:34.839 6956.898 - 7007.311: 100.0000% ( 1) 00:15:34.839 00:15:34.839 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:34.839 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:34.839 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:34.839 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:34.839 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:35.099 [ 00:15:35.099 { 00:15:35.099 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:35.099 "subtype": "Discovery", 00:15:35.099 "listen_addresses": [], 00:15:35.099 "allow_any_host": true, 00:15:35.099 "hosts": [] 00:15:35.099 }, 00:15:35.099 { 00:15:35.099 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:35.099 "subtype": "NVMe", 00:15:35.099 "listen_addresses": [ 00:15:35.099 { 00:15:35.099 "trtype": "VFIOUSER", 00:15:35.099 "adrfam": "IPv4", 00:15:35.099 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:35.099 "trsvcid": "0" 00:15:35.099 } 00:15:35.099 ], 00:15:35.099 "allow_any_host": true, 00:15:35.099 "hosts": [], 00:15:35.099 "serial_number": "SPDK1", 00:15:35.099 "model_number": "SPDK bdev Controller", 00:15:35.099 "max_namespaces": 32, 00:15:35.099 "min_cntlid": 1, 00:15:35.099 "max_cntlid": 65519, 00:15:35.099 "namespaces": [ 00:15:35.099 { 00:15:35.099 "nsid": 1, 00:15:35.099 "bdev_name": "Malloc1", 00:15:35.099 "name": "Malloc1", 00:15:35.099 "nguid": "BF444AEB60DC40AEAF0E4A13DC202B5B", 00:15:35.099 "uuid": "bf444aeb-60dc-40ae-af0e-4a13dc202b5b" 00:15:35.099 } 00:15:35.099 ] 00:15:35.099 }, 00:15:35.099 { 00:15:35.099 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:35.099 "subtype": "NVMe", 00:15:35.099 "listen_addresses": [ 00:15:35.099 { 00:15:35.099 "trtype": "VFIOUSER", 00:15:35.099 "adrfam": "IPv4", 00:15:35.099 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:35.099 "trsvcid": "0" 00:15:35.099 } 00:15:35.099 ], 00:15:35.099 "allow_any_host": true, 00:15:35.099 "hosts": [], 00:15:35.099 "serial_number": "SPDK2", 00:15:35.099 "model_number": "SPDK bdev Controller", 00:15:35.099 "max_namespaces": 32, 00:15:35.099 "min_cntlid": 1, 00:15:35.099 "max_cntlid": 65519, 00:15:35.099 "namespaces": [ 00:15:35.099 { 00:15:35.099 "nsid": 1, 00:15:35.099 "bdev_name": "Malloc2", 00:15:35.099 "name": "Malloc2", 00:15:35.099 "nguid": "526D91CDB3D849C69548227EEE0DD5E4", 00:15:35.099 "uuid": "526d91cd-b3d8-49c6-9548-227eee0dd5e4" 00:15:35.099 } 00:15:35.099 ] 00:15:35.099 } 00:15:35.099 ] 00:15:35.099 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:35.099 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2653921 00:15:35.099 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:35.099 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER 
traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:35.099 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:35.099 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:35.099 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:15:35.099 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:15:35.099 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:15:35.099 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:35.099 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:15:35.099 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:15:35.100 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:15:35.359 [2024-10-01 16:40:26.798318] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:35.359 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:35.359 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:35.359 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:35.359 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:35.359 16:40:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:35.619 Malloc3 00:15:35.619 16:40:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:35.619 [2024-10-01 16:40:27.263536] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:35.619 16:40:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:35.879 Asynchronous Event Request test 00:15:35.879 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:35.879 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:35.879 Registering asynchronous event callbacks... 00:15:35.879 Starting namespace attribute notice tests for all controllers... 00:15:35.879 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:35.879 aer_cb - Changed Namespace 00:15:35.879 Cleaning up... 
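(The sequence above is the namespace-attribute AER path: the aer helper connects to the vfio-user controller and registers its callbacks, the harness waits for the touch file the helper is told to create, and a hot-added namespace then makes the target raise the Namespace Attribute Changed notice logged as "aer_cb for log page 4, aen_event_type: 0x02". Condensed into a standalone sketch that reuses only the paths, flags, and NQNs already shown above; this is an illustration, not the exact nvmf_vfio_user.sh control flow:)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  aer=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer

  # Start the AER listener against the vfio-user controller in the background.
  $aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -n 2 -g -t /tmp/aer_touch_file &

  # Once the touch file appears, hot-add a second namespace; the target then
  # emits the Changed Namespace AEN seen in the log.
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
  $rpc bdev_malloc_create 64 512 --name Malloc3
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  wait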
00:15:35.879 [ 00:15:35.879 { 00:15:35.879 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:35.879 "subtype": "Discovery", 00:15:35.879 "listen_addresses": [], 00:15:35.879 "allow_any_host": true, 00:15:35.879 "hosts": [] 00:15:35.879 }, 00:15:35.879 { 00:15:35.879 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:35.879 "subtype": "NVMe", 00:15:35.879 "listen_addresses": [ 00:15:35.879 { 00:15:35.879 "trtype": "VFIOUSER", 00:15:35.879 "adrfam": "IPv4", 00:15:35.879 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:35.879 "trsvcid": "0" 00:15:35.879 } 00:15:35.879 ], 00:15:35.879 "allow_any_host": true, 00:15:35.879 "hosts": [], 00:15:35.879 "serial_number": "SPDK1", 00:15:35.879 "model_number": "SPDK bdev Controller", 00:15:35.879 "max_namespaces": 32, 00:15:35.879 "min_cntlid": 1, 00:15:35.879 "max_cntlid": 65519, 00:15:35.879 "namespaces": [ 00:15:35.879 { 00:15:35.879 "nsid": 1, 00:15:35.879 "bdev_name": "Malloc1", 00:15:35.879 "name": "Malloc1", 00:15:35.879 "nguid": "BF444AEB60DC40AEAF0E4A13DC202B5B", 00:15:35.879 "uuid": "bf444aeb-60dc-40ae-af0e-4a13dc202b5b" 00:15:35.879 }, 00:15:35.879 { 00:15:35.879 "nsid": 2, 00:15:35.879 "bdev_name": "Malloc3", 00:15:35.879 "name": "Malloc3", 00:15:35.879 "nguid": "213FAD7BF112440AB706703DAED6596F", 00:15:35.879 "uuid": "213fad7b-f112-440a-b706-703daed6596f" 00:15:35.879 } 00:15:35.879 ] 00:15:35.879 }, 00:15:35.879 { 00:15:35.879 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:35.879 "subtype": "NVMe", 00:15:35.879 "listen_addresses": [ 00:15:35.879 { 00:15:35.879 "trtype": "VFIOUSER", 00:15:35.879 "adrfam": "IPv4", 00:15:35.879 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:35.879 "trsvcid": "0" 00:15:35.879 } 00:15:35.879 ], 00:15:35.879 "allow_any_host": true, 00:15:35.879 "hosts": [], 00:15:35.879 "serial_number": "SPDK2", 00:15:35.879 "model_number": "SPDK bdev Controller", 00:15:35.879 "max_namespaces": 32, 00:15:35.879 "min_cntlid": 1, 00:15:35.879 "max_cntlid": 65519, 00:15:35.879 "namespaces": [ 00:15:35.879 { 00:15:35.880 "nsid": 1, 00:15:35.880 "bdev_name": "Malloc2", 00:15:35.880 "name": "Malloc2", 00:15:35.880 "nguid": "526D91CDB3D849C69548227EEE0DD5E4", 00:15:35.880 "uuid": "526d91cd-b3d8-49c6-9548-227eee0dd5e4" 00:15:35.880 } 00:15:35.880 ] 00:15:35.880 } 00:15:35.880 ] 00:15:35.880 16:40:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2653921 00:15:35.880 16:40:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:35.880 16:40:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:35.880 16:40:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:35.880 16:40:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:35.880 [2024-10-01 16:40:27.503514] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:15:35.880 [2024-10-01 16:40:27.503557] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2653959 ] 00:15:35.880 [2024-10-01 16:40:27.533655] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:35.880 [2024-10-01 16:40:27.542188] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:35.880 [2024-10-01 16:40:27.542211] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9d7b507000 00:15:35.880 [2024-10-01 16:40:27.543189] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:35.880 [2024-10-01 16:40:27.544195] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:35.880 [2024-10-01 16:40:27.545198] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:35.880 [2024-10-01 16:40:27.546208] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:35.880 [2024-10-01 16:40:27.547213] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:35.880 [2024-10-01 16:40:27.548218] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:35.880 [2024-10-01 16:40:27.549220] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:35.880 [2024-10-01 16:40:27.550225] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:35.880 [2024-10-01 16:40:27.551231] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:35.880 [2024-10-01 16:40:27.551241] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9d7b4fc000 00:15:35.880 [2024-10-01 16:40:27.552467] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:36.142 [2024-10-01 16:40:27.568025] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:36.142 [2024-10-01 16:40:27.568050] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:36.142 [2024-10-01 16:40:27.573130] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:36.142 [2024-10-01 16:40:27.573175] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:36.142 [2024-10-01 16:40:27.573253] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:36.142 [2024-10-01 
16:40:27.573268] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:36.142 [2024-10-01 16:40:27.573274] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:36.142 [2024-10-01 16:40:27.574133] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:36.142 [2024-10-01 16:40:27.574142] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:36.142 [2024-10-01 16:40:27.574149] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:36.142 [2024-10-01 16:40:27.575138] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:36.142 [2024-10-01 16:40:27.575147] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:36.142 [2024-10-01 16:40:27.575154] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:36.142 [2024-10-01 16:40:27.576148] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:36.142 [2024-10-01 16:40:27.576157] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:36.142 [2024-10-01 16:40:27.577156] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:36.142 [2024-10-01 16:40:27.577165] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:36.142 [2024-10-01 16:40:27.577170] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:36.142 [2024-10-01 16:40:27.577177] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:36.142 [2024-10-01 16:40:27.577282] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:36.142 [2024-10-01 16:40:27.577287] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:36.142 [2024-10-01 16:40:27.577291] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:36.142 [2024-10-01 16:40:27.578163] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:36.142 [2024-10-01 16:40:27.579167] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:36.142 [2024-10-01 16:40:27.580175] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:15:36.142 [2024-10-01 16:40:27.581182] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:36.142 [2024-10-01 16:40:27.581221] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:36.142 [2024-10-01 16:40:27.582188] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:36.142 [2024-10-01 16:40:27.582197] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:36.142 [2024-10-01 16:40:27.582202] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:36.142 [2024-10-01 16:40:27.582222] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:36.142 [2024-10-01 16:40:27.582229] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:36.142 [2024-10-01 16:40:27.582241] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:36.142 [2024-10-01 16:40:27.582246] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.142 [2024-10-01 16:40:27.582249] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.142 [2024-10-01 16:40:27.582261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.142 [2024-10-01 16:40:27.586979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:36.142 [2024-10-01 16:40:27.586993] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:36.142 [2024-10-01 16:40:27.586998] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:36.142 [2024-10-01 16:40:27.587002] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:36.142 [2024-10-01 16:40:27.587007] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:36.142 [2024-10-01 16:40:27.587012] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:36.142 [2024-10-01 16:40:27.587016] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:36.142 [2024-10-01 16:40:27.587021] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:36.142 [2024-10-01 16:40:27.587028] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:36.142 [2024-10-01 16:40:27.587038] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:36.142 [2024-10-01 16:40:27.594976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:36.142 [2024-10-01 16:40:27.594989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.142 [2024-10-01 16:40:27.594998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.142 [2024-10-01 16:40:27.595006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.142 [2024-10-01 16:40:27.595014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.142 [2024-10-01 16:40:27.595018] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:36.142 [2024-10-01 16:40:27.595027] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:36.142 [2024-10-01 16:40:27.595036] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:36.142 [2024-10-01 16:40:27.602977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:36.142 [2024-10-01 16:40:27.602985] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:36.142 [2024-10-01 16:40:27.602990] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:36.142 [2024-10-01 16:40:27.602997] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:36.143 [2024-10-01 16:40:27.603005] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:36.143 [2024-10-01 16:40:27.603013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:36.143 [2024-10-01 16:40:27.610975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:36.143 [2024-10-01 16:40:27.611035] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:36.143 [2024-10-01 16:40:27.611045] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:36.143 [2024-10-01 16:40:27.611053] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:36.143 [2024-10-01 16:40:27.611057] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:36.143 [2024-10-01 16:40:27.611061] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:15:36.143 [2024-10-01 16:40:27.611067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:36.143 [2024-10-01 16:40:27.618978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:36.143 [2024-10-01 16:40:27.618989] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:36.143 [2024-10-01 16:40:27.618998] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:36.143 [2024-10-01 16:40:27.619006] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:36.143 [2024-10-01 16:40:27.619013] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:36.143 [2024-10-01 16:40:27.619017] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.143 [2024-10-01 16:40:27.619021] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.143 [2024-10-01 16:40:27.619027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.143 [2024-10-01 16:40:27.626977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:36.143 [2024-10-01 16:40:27.626990] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:36.143 [2024-10-01 16:40:27.626998] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:36.143 [2024-10-01 16:40:27.627005] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:36.143 [2024-10-01 16:40:27.627010] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.143 [2024-10-01 16:40:27.627013] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.143 [2024-10-01 16:40:27.627019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.143 [2024-10-01 16:40:27.634978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:36.143 [2024-10-01 16:40:27.634988] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:36.143 [2024-10-01 16:40:27.634994] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:36.143 [2024-10-01 16:40:27.635002] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:36.143 [2024-10-01 16:40:27.635008] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:36.143 [2024-10-01 16:40:27.635013] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:36.143 [2024-10-01 16:40:27.635022] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:36.143 [2024-10-01 16:40:27.635027] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:36.143 [2024-10-01 16:40:27.635031] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:36.143 [2024-10-01 16:40:27.635036] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:36.143 [2024-10-01 16:40:27.635052] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:36.143 [2024-10-01 16:40:27.642976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:36.143 [2024-10-01 16:40:27.642989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:36.143 [2024-10-01 16:40:27.650976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:36.143 [2024-10-01 16:40:27.650990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:36.143 [2024-10-01 16:40:27.658977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:36.143 [2024-10-01 16:40:27.658990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:36.143 [2024-10-01 16:40:27.666976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:36.143 [2024-10-01 16:40:27.666991] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:36.143 [2024-10-01 16:40:27.666996] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:36.143 [2024-10-01 16:40:27.667000] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:36.143 [2024-10-01 16:40:27.667003] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:36.143 [2024-10-01 16:40:27.667007] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:36.143 [2024-10-01 16:40:27.667013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:36.143 [2024-10-01 16:40:27.667021] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:36.143 [2024-10-01 16:40:27.667025] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:36.143 [2024-10-01 16:40:27.667029] 
nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.143 [2024-10-01 16:40:27.667034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:36.143 [2024-10-01 16:40:27.667041] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:36.143 [2024-10-01 16:40:27.667046] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.143 [2024-10-01 16:40:27.667049] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.143 [2024-10-01 16:40:27.667055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.143 [2024-10-01 16:40:27.667062] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:36.143 [2024-10-01 16:40:27.667066] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:36.143 [2024-10-01 16:40:27.667069] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.143 [2024-10-01 16:40:27.667077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:36.143 [2024-10-01 16:40:27.674978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:36.143 [2024-10-01 16:40:27.674993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:36.143 [2024-10-01 16:40:27.675003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:36.143 [2024-10-01 16:40:27.675009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:36.143 ===================================================== 00:15:36.143 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:36.143 ===================================================== 00:15:36.143 Controller Capabilities/Features 00:15:36.143 ================================ 00:15:36.143 Vendor ID: 4e58 00:15:36.143 Subsystem Vendor ID: 4e58 00:15:36.143 Serial Number: SPDK2 00:15:36.143 Model Number: SPDK bdev Controller 00:15:36.143 Firmware Version: 25.01 00:15:36.143 Recommended Arb Burst: 6 00:15:36.143 IEEE OUI Identifier: 8d 6b 50 00:15:36.143 Multi-path I/O 00:15:36.143 May have multiple subsystem ports: Yes 00:15:36.143 May have multiple controllers: Yes 00:15:36.143 Associated with SR-IOV VF: No 00:15:36.143 Max Data Transfer Size: 131072 00:15:36.143 Max Number of Namespaces: 32 00:15:36.143 Max Number of I/O Queues: 127 00:15:36.143 NVMe Specification Version (VS): 1.3 00:15:36.143 NVMe Specification Version (Identify): 1.3 00:15:36.143 Maximum Queue Entries: 256 00:15:36.143 Contiguous Queues Required: Yes 00:15:36.143 Arbitration Mechanisms Supported 00:15:36.143 Weighted Round Robin: Not Supported 00:15:36.143 Vendor Specific: Not Supported 00:15:36.143 Reset Timeout: 15000 ms 00:15:36.143 Doorbell Stride: 4 bytes 00:15:36.143 NVM Subsystem Reset: Not Supported 00:15:36.143 Command 
Sets Supported 00:15:36.143 NVM Command Set: Supported 00:15:36.143 Boot Partition: Not Supported 00:15:36.143 Memory Page Size Minimum: 4096 bytes 00:15:36.143 Memory Page Size Maximum: 4096 bytes 00:15:36.143 Persistent Memory Region: Not Supported 00:15:36.143 Optional Asynchronous Events Supported 00:15:36.143 Namespace Attribute Notices: Supported 00:15:36.143 Firmware Activation Notices: Not Supported 00:15:36.143 ANA Change Notices: Not Supported 00:15:36.143 PLE Aggregate Log Change Notices: Not Supported 00:15:36.143 LBA Status Info Alert Notices: Not Supported 00:15:36.143 EGE Aggregate Log Change Notices: Not Supported 00:15:36.143 Normal NVM Subsystem Shutdown event: Not Supported 00:15:36.144 Zone Descriptor Change Notices: Not Supported 00:15:36.144 Discovery Log Change Notices: Not Supported 00:15:36.144 Controller Attributes 00:15:36.144 128-bit Host Identifier: Supported 00:15:36.144 Non-Operational Permissive Mode: Not Supported 00:15:36.144 NVM Sets: Not Supported 00:15:36.144 Read Recovery Levels: Not Supported 00:15:36.144 Endurance Groups: Not Supported 00:15:36.144 Predictable Latency Mode: Not Supported 00:15:36.144 Traffic Based Keep ALive: Not Supported 00:15:36.144 Namespace Granularity: Not Supported 00:15:36.144 SQ Associations: Not Supported 00:15:36.144 UUID List: Not Supported 00:15:36.144 Multi-Domain Subsystem: Not Supported 00:15:36.144 Fixed Capacity Management: Not Supported 00:15:36.144 Variable Capacity Management: Not Supported 00:15:36.144 Delete Endurance Group: Not Supported 00:15:36.144 Delete NVM Set: Not Supported 00:15:36.144 Extended LBA Formats Supported: Not Supported 00:15:36.144 Flexible Data Placement Supported: Not Supported 00:15:36.144 00:15:36.144 Controller Memory Buffer Support 00:15:36.144 ================================ 00:15:36.144 Supported: No 00:15:36.144 00:15:36.144 Persistent Memory Region Support 00:15:36.144 ================================ 00:15:36.144 Supported: No 00:15:36.144 00:15:36.144 Admin Command Set Attributes 00:15:36.144 ============================ 00:15:36.144 Security Send/Receive: Not Supported 00:15:36.144 Format NVM: Not Supported 00:15:36.144 Firmware Activate/Download: Not Supported 00:15:36.144 Namespace Management: Not Supported 00:15:36.144 Device Self-Test: Not Supported 00:15:36.144 Directives: Not Supported 00:15:36.144 NVMe-MI: Not Supported 00:15:36.144 Virtualization Management: Not Supported 00:15:36.144 Doorbell Buffer Config: Not Supported 00:15:36.144 Get LBA Status Capability: Not Supported 00:15:36.144 Command & Feature Lockdown Capability: Not Supported 00:15:36.144 Abort Command Limit: 4 00:15:36.144 Async Event Request Limit: 4 00:15:36.144 Number of Firmware Slots: N/A 00:15:36.144 Firmware Slot 1 Read-Only: N/A 00:15:36.144 Firmware Activation Without Reset: N/A 00:15:36.144 Multiple Update Detection Support: N/A 00:15:36.144 Firmware Update Granularity: No Information Provided 00:15:36.144 Per-Namespace SMART Log: No 00:15:36.144 Asymmetric Namespace Access Log Page: Not Supported 00:15:36.144 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:36.144 Command Effects Log Page: Supported 00:15:36.144 Get Log Page Extended Data: Supported 00:15:36.144 Telemetry Log Pages: Not Supported 00:15:36.144 Persistent Event Log Pages: Not Supported 00:15:36.144 Supported Log Pages Log Page: May Support 00:15:36.144 Commands Supported & Effects Log Page: Not Supported 00:15:36.144 Feature Identifiers & Effects Log Page:May Support 00:15:36.144 NVMe-MI Commands & Effects Log Page: May Support 
00:15:36.144 Data Area 4 for Telemetry Log: Not Supported 00:15:36.144 Error Log Page Entries Supported: 128 00:15:36.144 Keep Alive: Supported 00:15:36.144 Keep Alive Granularity: 10000 ms 00:15:36.144 00:15:36.144 NVM Command Set Attributes 00:15:36.144 ========================== 00:15:36.144 Submission Queue Entry Size 00:15:36.144 Max: 64 00:15:36.144 Min: 64 00:15:36.144 Completion Queue Entry Size 00:15:36.144 Max: 16 00:15:36.144 Min: 16 00:15:36.144 Number of Namespaces: 32 00:15:36.144 Compare Command: Supported 00:15:36.144 Write Uncorrectable Command: Not Supported 00:15:36.144 Dataset Management Command: Supported 00:15:36.144 Write Zeroes Command: Supported 00:15:36.144 Set Features Save Field: Not Supported 00:15:36.144 Reservations: Not Supported 00:15:36.144 Timestamp: Not Supported 00:15:36.144 Copy: Supported 00:15:36.144 Volatile Write Cache: Present 00:15:36.144 Atomic Write Unit (Normal): 1 00:15:36.144 Atomic Write Unit (PFail): 1 00:15:36.144 Atomic Compare & Write Unit: 1 00:15:36.144 Fused Compare & Write: Supported 00:15:36.144 Scatter-Gather List 00:15:36.144 SGL Command Set: Supported (Dword aligned) 00:15:36.144 SGL Keyed: Not Supported 00:15:36.144 SGL Bit Bucket Descriptor: Not Supported 00:15:36.144 SGL Metadata Pointer: Not Supported 00:15:36.144 Oversized SGL: Not Supported 00:15:36.144 SGL Metadata Address: Not Supported 00:15:36.144 SGL Offset: Not Supported 00:15:36.144 Transport SGL Data Block: Not Supported 00:15:36.144 Replay Protected Memory Block: Not Supported 00:15:36.144 00:15:36.144 Firmware Slot Information 00:15:36.144 ========================= 00:15:36.144 Active slot: 1 00:15:36.144 Slot 1 Firmware Revision: 25.01 00:15:36.144 00:15:36.144 00:15:36.144 Commands Supported and Effects 00:15:36.144 ============================== 00:15:36.144 Admin Commands 00:15:36.144 -------------- 00:15:36.144 Get Log Page (02h): Supported 00:15:36.144 Identify (06h): Supported 00:15:36.144 Abort (08h): Supported 00:15:36.144 Set Features (09h): Supported 00:15:36.144 Get Features (0Ah): Supported 00:15:36.144 Asynchronous Event Request (0Ch): Supported 00:15:36.144 Keep Alive (18h): Supported 00:15:36.144 I/O Commands 00:15:36.144 ------------ 00:15:36.144 Flush (00h): Supported LBA-Change 00:15:36.144 Write (01h): Supported LBA-Change 00:15:36.144 Read (02h): Supported 00:15:36.144 Compare (05h): Supported 00:15:36.144 Write Zeroes (08h): Supported LBA-Change 00:15:36.144 Dataset Management (09h): Supported LBA-Change 00:15:36.144 Copy (19h): Supported LBA-Change 00:15:36.144 00:15:36.144 Error Log 00:15:36.144 ========= 00:15:36.144 00:15:36.144 Arbitration 00:15:36.144 =========== 00:15:36.144 Arbitration Burst: 1 00:15:36.144 00:15:36.144 Power Management 00:15:36.144 ================ 00:15:36.144 Number of Power States: 1 00:15:36.144 Current Power State: Power State #0 00:15:36.144 Power State #0: 00:15:36.144 Max Power: 0.00 W 00:15:36.144 Non-Operational State: Operational 00:15:36.144 Entry Latency: Not Reported 00:15:36.144 Exit Latency: Not Reported 00:15:36.144 Relative Read Throughput: 0 00:15:36.144 Relative Read Latency: 0 00:15:36.144 Relative Write Throughput: 0 00:15:36.144 Relative Write Latency: 0 00:15:36.144 Idle Power: Not Reported 00:15:36.144 Active Power: Not Reported 00:15:36.144 Non-Operational Permissive Mode: Not Supported 00:15:36.144 00:15:36.144 Health Information 00:15:36.144 ================== 00:15:36.144 Critical Warnings: 00:15:36.144 Available Spare Space: OK 00:15:36.144 Temperature: OK 00:15:36.144 Device 
Reliability: OK 00:15:36.144 Read Only: No 00:15:36.144 Volatile Memory Backup: OK 00:15:36.144 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:36.144 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:36.144 Available Spare: 0% 00:15:36.144 Available Spare Threshold: 0% 00:15:36.145 [2024-10-01 16:40:27.675099] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 [2024-10-01 16:40:27.682978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 [2024-10-01 16:40:27.683006] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD [2024-10-01 16:40:27.683015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-10-01 16:40:27.683021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-10-01 16:40:27.683027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-10-01 16:40:27.683033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-10-01 16:40:27.683071] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 [2024-10-01 16:40:27.683081] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 [2024-10-01 16:40:27.684073] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller [2024-10-01 16:40:27.684120] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us [2024-10-01 16:40:27.684127] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms [2024-10-01 16:40:27.685077] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 [2024-10-01 16:40:27.685089] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds [2024-10-01 16:40:27.685143] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl [2024-10-01 16:40:27.687977] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:36.145 Life Percentage Used: 0% 00:15:36.145 Data Units Read: 0 00:15:36.145 Data Units Written: 0 00:15:36.145 Host Read Commands: 0 00:15:36.145 Host Write Commands: 0 00:15:36.145 Controller Busy Time: 0 minutes 00:15:36.145 Power Cycles: 0 00:15:36.145 Power On Hours: 0 hours 00:15:36.145 Unsafe Shutdowns: 0 00:15:36.145 Unrecoverable Media Errors: 0 00:15:36.145 Lifetime Error Log Entries: 0 00:15:36.145 Warning Temperature Time: 0 minutes 00:15:36.145 Critical Temperature Time: 0 minutes 00:15:36.145 00:15:36.145 Number of Queues 00:15:36.145 ================ 00:15:36.145 Number of
I/O Submission Queues: 127 00:15:36.145 Number of I/O Completion Queues: 127 00:15:36.145 00:15:36.145 Active Namespaces 00:15:36.145 ================= 00:15:36.145 Namespace ID:1 00:15:36.145 Error Recovery Timeout: Unlimited 00:15:36.145 Command Set Identifier: NVM (00h) 00:15:36.145 Deallocate: Supported 00:15:36.145 Deallocated/Unwritten Error: Not Supported 00:15:36.145 Deallocated Read Value: Unknown 00:15:36.145 Deallocate in Write Zeroes: Not Supported 00:15:36.145 Deallocated Guard Field: 0xFFFF 00:15:36.145 Flush: Supported 00:15:36.145 Reservation: Supported 00:15:36.145 Namespace Sharing Capabilities: Multiple Controllers 00:15:36.145 Size (in LBAs): 131072 (0GiB) 00:15:36.145 Capacity (in LBAs): 131072 (0GiB) 00:15:36.145 Utilization (in LBAs): 131072 (0GiB) 00:15:36.145 NGUID: 526D91CDB3D849C69548227EEE0DD5E4 00:15:36.145 UUID: 526d91cd-b3d8-49c6-9548-227eee0dd5e4 00:15:36.145 Thin Provisioning: Not Supported 00:15:36.145 Per-NS Atomic Units: Yes 00:15:36.145 Atomic Boundary Size (Normal): 0 00:15:36.145 Atomic Boundary Size (PFail): 0 00:15:36.145 Atomic Boundary Offset: 0 00:15:36.145 Maximum Single Source Range Length: 65535 00:15:36.145 Maximum Copy Length: 65535 00:15:36.145 Maximum Source Range Count: 1 00:15:36.145 NGUID/EUI64 Never Reused: No 00:15:36.145 Namespace Write Protected: No 00:15:36.145 Number of LBA Formats: 1 00:15:36.145 Current LBA Format: LBA Format #00 00:15:36.145 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:36.145 00:15:36.145 16:40:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:36.405 [2024-10-01 16:40:27.873876] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.690 Initializing NVMe Controllers 00:15:41.690 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:41.690 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:41.690 Initialization complete. Launching workers. 
00:15:41.690 ======================================================== 00:15:41.690 Latency(us) 00:15:41.690 Device Information : IOPS MiB/s Average min max 00:15:41.690 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39965.19 156.11 3202.65 869.58 6885.63 00:15:41.690 ======================================================== 00:15:41.690 Total : 39965.19 156.11 3202.65 869.58 6885.63 00:15:41.690 00:15:41.690 [2024-10-01 16:40:32.979161] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:41.690 16:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:41.690 [2024-10-01 16:40:33.156696] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:46.975 Initializing NVMe Controllers 00:15:46.975 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:46.975 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:46.975 Initialization complete. Launching workers. 00:15:46.975 ======================================================== 00:15:46.975 Latency(us) 00:15:46.975 Device Information : IOPS MiB/s Average min max 00:15:46.975 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39414.00 153.96 3247.87 1057.76 10463.34 00:15:46.975 ======================================================== 00:15:46.975 Total : 39414.00 153.96 3247.87 1057.76 10463.34 00:15:46.975 00:15:46.975 [2024-10-01 16:40:38.177846] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:46.975 16:40:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:46.975 [2024-10-01 16:40:38.372231] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:52.252 [2024-10-01 16:40:43.518064] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:52.252 Initializing NVMe Controllers 00:15:52.252 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:52.252 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:52.252 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:52.252 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:52.252 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:52.252 Initialization complete. Launching workers. 
00:15:52.252 Starting thread on core 2 00:15:52.252 Starting thread on core 3 00:15:52.252 Starting thread on core 1 00:15:52.252 16:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:52.252 [2024-10-01 16:40:43.780464] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:55.549 [2024-10-01 16:40:46.832278] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:55.549 Initializing NVMe Controllers 00:15:55.549 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:55.549 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:55.549 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:55.549 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:55.549 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:55.549 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:55.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:55.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:55.549 Initialization complete. Launching workers. 00:15:55.549 Starting thread on core 1 with urgent priority queue 00:15:55.549 Starting thread on core 2 with urgent priority queue 00:15:55.549 Starting thread on core 3 with urgent priority queue 00:15:55.549 Starting thread on core 0 with urgent priority queue 00:15:55.549 SPDK bdev Controller (SPDK2 ) core 0: 11332.67 IO/s 8.82 secs/100000 ios 00:15:55.549 SPDK bdev Controller (SPDK2 ) core 1: 11209.00 IO/s 8.92 secs/100000 ios 00:15:55.549 SPDK bdev Controller (SPDK2 ) core 2: 11881.67 IO/s 8.42 secs/100000 ios 00:15:55.549 SPDK bdev Controller (SPDK2 ) core 3: 9483.67 IO/s 10.54 secs/100000 ios 00:15:55.549 ======================================================== 00:15:55.549 00:15:55.549 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:55.549 [2024-10-01 16:40:47.085607] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:55.549 Initializing NVMe Controllers 00:15:55.549 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:55.549 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:55.549 Namespace ID: 1 size: 0GB 00:15:55.549 Initialization complete. 00:15:55.549 INFO: using host memory buffer for IO 00:15:55.549 Hello world! 
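(Throughput in the two spdk_nvme_perf tables earlier follows directly from the 4 KiB I/O size set by -o 4096: MiB/s = IOPS * 4096 / 2^20. A one-line standalone check, not part of the test scripts:)
  awk 'BEGIN { printf "read:  %.2f MiB/s\n", 39965.19 * 4096 / 1048576;
               printf "write: %.2f MiB/s\n", 39414.00 * 4096 / 1048576 }'
  # -> 156.11 and 153.96, matching the MiB/s column in the tables above.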
00:15:55.549 [2024-10-01 16:40:47.095676] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:55.549 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:55.809 [2024-10-01 16:40:47.349118] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:57.188 Initializing NVMe Controllers 00:15:57.188 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:57.188 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:57.188 Initialization complete. Launching workers. 00:15:57.188 submit (in ns) avg, min, max = 7392.4, 3646.2, 4002266.9 00:15:57.188 complete (in ns) avg, min, max = 18477.5, 2200.0, 6992822.3 00:15:57.188 00:15:57.188 Submit histogram 00:15:57.188 ================ 00:15:57.188 Range in us Cumulative Count 00:15:57.188 3.643 - 3.668: 0.7131% ( 135) 00:15:57.188 3.668 - 3.692: 6.5237% ( 1100) 00:15:57.188 3.692 - 3.717: 15.8523% ( 1766) 00:15:57.188 3.717 - 3.742: 26.2004% ( 1959) 00:15:57.188 3.742 - 3.766: 35.2755% ( 1718) 00:15:57.188 3.766 - 3.791: 46.4635% ( 2118) 00:15:57.188 3.791 - 3.815: 61.5868% ( 2863) 00:15:57.188 3.815 - 3.840: 77.3018% ( 2975) 00:15:57.188 3.840 - 3.865: 89.8790% ( 2381) 00:15:57.188 3.865 - 3.889: 96.5031% ( 1254) 00:15:57.188 3.889 - 3.914: 98.8273% ( 440) 00:15:57.188 3.914 - 3.938: 99.3397% ( 97) 00:15:57.188 3.938 - 3.963: 99.4718% ( 25) 00:15:57.188 3.963 - 3.988: 99.4876% ( 3) 00:15:57.188 3.988 - 4.012: 99.4929% ( 1) 00:15:57.188 4.012 - 4.037: 99.5035% ( 2) 00:15:57.188 4.037 - 4.062: 99.5140% ( 2) 00:15:57.188 4.062 - 4.086: 99.5193% ( 1) 00:15:57.188 4.086 - 4.111: 99.5246% ( 1) 00:15:57.188 4.185 - 4.209: 99.5299% ( 1) 00:15:57.188 4.332 - 4.357: 99.5352% ( 1) 00:15:57.188 5.489 - 5.514: 99.5404% ( 1) 00:15:57.188 5.785 - 5.809: 99.5457% ( 1) 00:15:57.188 6.302 - 6.351: 99.5510% ( 1) 00:15:57.189 6.597 - 6.646: 99.5563% ( 1) 00:15:57.189 6.695 - 6.745: 99.5616% ( 1) 00:15:57.189 6.794 - 6.843: 99.5668% ( 1) 00:15:57.189 6.991 - 7.040: 99.5721% ( 1) 00:15:57.189 7.040 - 7.089: 99.5880% ( 3) 00:15:57.189 7.089 - 7.138: 99.5933% ( 1) 00:15:57.189 7.138 - 7.188: 99.6038% ( 2) 00:15:57.189 7.188 - 7.237: 99.6144% ( 2) 00:15:57.189 7.237 - 7.286: 99.6250% ( 2) 00:15:57.189 7.286 - 7.335: 99.6302% ( 1) 00:15:57.189 7.335 - 7.385: 99.6355% ( 1) 00:15:57.189 7.385 - 7.434: 99.6514% ( 3) 00:15:57.189 7.434 - 7.483: 99.6566% ( 1) 00:15:57.189 7.483 - 7.532: 99.6672% ( 2) 00:15:57.189 7.532 - 7.582: 99.6831% ( 3) 00:15:57.189 7.582 - 7.631: 99.6989% ( 3) 00:15:57.189 7.631 - 7.680: 99.7095% ( 2) 00:15:57.189 7.680 - 7.729: 99.7148% ( 1) 00:15:57.189 7.729 - 7.778: 99.7253% ( 2) 00:15:57.189 7.778 - 7.828: 99.7359% ( 2) 00:15:57.189 7.926 - 7.975: 99.7412% ( 1) 00:15:57.189 7.975 - 8.025: 99.7517% ( 2) 00:15:57.189 8.025 - 8.074: 99.7623% ( 2) 00:15:57.189 8.074 - 8.123: 99.7729% ( 2) 00:15:57.189 8.123 - 8.172: 99.7940% ( 4) 00:15:57.189 8.172 - 8.222: 99.7993% ( 1) 00:15:57.189 8.222 - 8.271: 99.8098% ( 2) 00:15:57.189 8.271 - 8.320: 99.8257% ( 3) 00:15:57.189 8.369 - 8.418: 99.8310% ( 1) 00:15:57.189 8.517 - 8.566: 99.8362% ( 1) 00:15:57.189 8.566 - 8.615: 99.8415% ( 1) 00:15:57.189 8.665 - 8.714: 99.8468% ( 1) 00:15:57.189 8.714 - 8.763: 99.8521% ( 1) 00:15:57.189 8.763 - 8.812: 99.8574% ( 1) 
00:15:57.189 8.960 - 9.009: 99.8627% ( 1) 00:15:57.189 9.058 - 9.108: 99.8679% ( 1) 00:15:57.189 9.108 - 9.157: 99.8732% ( 1) 00:15:57.189 9.305 - 9.354: 99.8838% ( 2) 00:15:57.189 9.649 - 9.698: 99.8891% ( 1) 00:15:57.189 9.698 - 9.748: 99.8944% ( 1) 00:15:57.189 10.092 - 10.142: 99.8996% ( 1) 00:15:57.189 10.388 - 10.437: 99.9049% ( 1) 00:15:57.189 11.471 - 11.520: 99.9102% ( 1) 00:15:57.189 3982.572 - 4007.778: 100.0000% ( 17) 00:15:57.189 00:15:57.189 Complete histogram 00:15:57.189 ================== 00:15:57.189 Range in us Cumulative Count 00:15:57.189 2.191 - 2.203: 0.0845% ( 16) 00:15:57.189 2.203 - 2.215: 0.9086% ( 156) 00:15:57.189 2.215 - 2.228: 1.0353% ( 24) 00:15:57.189 2.228 - 2.240: 7.3741% ( 1200) 00:15:57.189 2.240 - 2.252: 50.3196% ( 8130) 00:15:57.189 2.252 - 2.265: 59.1834% ( 1678) 00:15:57.189 2.265 - 2.277: 71.3433% ( 2302) 00:15:57.189 2.277 - 2.289: 79.2140% ( 1490) 00:15:57.189 2.289 - 2.302: 81.1262% ( 362) 00:15:57.189 2.302 - [2024-10-01 16:40:48.445347] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:57.189 2.314: 83.8255% ( 511) 00:15:57.189 2.314 - 2.326: 89.0180% ( 983) 00:15:57.189 2.326 - 2.338: 94.2370% ( 988) 00:15:57.189 2.338 - 2.351: 96.8095% ( 487) 00:15:57.189 2.351 - 2.363: 98.3625% ( 294) 00:15:57.189 2.363 - 2.375: 99.0597% ( 132) 00:15:57.189 2.375 - 2.388: 99.2658% ( 39) 00:15:57.189 2.388 - 2.400: 99.3239% ( 11) 00:15:57.189 2.400 - 2.412: 99.3397% ( 3) 00:15:57.189 4.923 - 4.948: 99.3450% ( 1) 00:15:57.189 5.120 - 5.145: 99.3503% ( 1) 00:15:57.189 5.218 - 5.243: 99.3556% ( 1) 00:15:57.189 5.366 - 5.391: 99.3608% ( 1) 00:15:57.189 5.391 - 5.415: 99.3661% ( 1) 00:15:57.189 5.440 - 5.465: 99.3714% ( 1) 00:15:57.189 5.514 - 5.538: 99.3767% ( 1) 00:15:57.189 5.538 - 5.563: 99.3820% ( 1) 00:15:57.189 5.563 - 5.588: 99.3872% ( 1) 00:15:57.189 5.662 - 5.686: 99.4031% ( 3) 00:15:57.189 5.686 - 5.711: 99.4137% ( 2) 00:15:57.189 5.809 - 5.834: 99.4189% ( 1) 00:15:57.189 5.908 - 5.932: 99.4242% ( 1) 00:15:57.189 5.932 - 5.957: 99.4295% ( 1) 00:15:57.189 5.957 - 5.982: 99.4348% ( 1) 00:15:57.189 6.154 - 6.178: 99.4401% ( 1) 00:15:57.189 6.178 - 6.203: 99.4454% ( 1) 00:15:57.189 6.203 - 6.228: 99.4506% ( 1) 00:15:57.189 6.400 - 6.449: 99.4718% ( 4) 00:15:57.189 6.498 - 6.548: 99.4876% ( 3) 00:15:57.189 6.695 - 6.745: 99.5087% ( 4) 00:15:57.189 6.794 - 6.843: 99.5140% ( 1) 00:15:57.189 6.843 - 6.892: 99.5193% ( 1) 00:15:57.189 6.892 - 6.942: 99.5299% ( 2) 00:15:57.189 6.942 - 6.991: 99.5352% ( 1) 00:15:57.189 7.040 - 7.089: 99.5404% ( 1) 00:15:57.189 7.237 - 7.286: 99.5510% ( 2) 00:15:57.189 7.434 - 7.483: 99.5616% ( 2) 00:15:57.189 7.483 - 7.532: 99.5668% ( 1) 00:15:57.189 7.828 - 7.877: 99.5721% ( 1) 00:15:57.189 8.123 - 8.172: 99.5774% ( 1) 00:15:57.189 8.418 - 8.468: 99.5827% ( 1) 00:15:57.189 9.502 - 9.551: 99.5880% ( 1) 00:15:57.189 10.289 - 10.338: 99.5933% ( 1) 00:15:57.189 11.569 - 11.618: 99.5985% ( 1) 00:15:57.189 3982.572 - 4007.778: 99.9947% ( 75) 00:15:57.189 6956.898 - 7007.311: 100.0000% ( 1) 00:15:57.189 00:15:57.189 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:57.189 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:57.189 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 
00:15:57.189 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:57.189 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:57.189 [ 00:15:57.189 { 00:15:57.189 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:57.189 "subtype": "Discovery", 00:15:57.189 "listen_addresses": [], 00:15:57.189 "allow_any_host": true, 00:15:57.189 "hosts": [] 00:15:57.189 }, 00:15:57.189 { 00:15:57.189 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:57.189 "subtype": "NVMe", 00:15:57.189 "listen_addresses": [ 00:15:57.189 { 00:15:57.189 "trtype": "VFIOUSER", 00:15:57.189 "adrfam": "IPv4", 00:15:57.189 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:57.189 "trsvcid": "0" 00:15:57.189 } 00:15:57.189 ], 00:15:57.189 "allow_any_host": true, 00:15:57.189 "hosts": [], 00:15:57.189 "serial_number": "SPDK1", 00:15:57.189 "model_number": "SPDK bdev Controller", 00:15:57.189 "max_namespaces": 32, 00:15:57.189 "min_cntlid": 1, 00:15:57.189 "max_cntlid": 65519, 00:15:57.189 "namespaces": [ 00:15:57.189 { 00:15:57.189 "nsid": 1, 00:15:57.189 "bdev_name": "Malloc1", 00:15:57.189 "name": "Malloc1", 00:15:57.189 "nguid": "BF444AEB60DC40AEAF0E4A13DC202B5B", 00:15:57.189 "uuid": "bf444aeb-60dc-40ae-af0e-4a13dc202b5b" 00:15:57.189 }, 00:15:57.189 { 00:15:57.189 "nsid": 2, 00:15:57.189 "bdev_name": "Malloc3", 00:15:57.189 "name": "Malloc3", 00:15:57.189 "nguid": "213FAD7BF112440AB706703DAED6596F", 00:15:57.189 "uuid": "213fad7b-f112-440a-b706-703daed6596f" 00:15:57.189 } 00:15:57.189 ] 00:15:57.189 }, 00:15:57.189 { 00:15:57.189 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:57.189 "subtype": "NVMe", 00:15:57.189 "listen_addresses": [ 00:15:57.189 { 00:15:57.189 "trtype": "VFIOUSER", 00:15:57.189 "adrfam": "IPv4", 00:15:57.189 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:57.189 "trsvcid": "0" 00:15:57.189 } 00:15:57.189 ], 00:15:57.189 "allow_any_host": true, 00:15:57.189 "hosts": [], 00:15:57.189 "serial_number": "SPDK2", 00:15:57.189 "model_number": "SPDK bdev Controller", 00:15:57.189 "max_namespaces": 32, 00:15:57.189 "min_cntlid": 1, 00:15:57.189 "max_cntlid": 65519, 00:15:57.189 "namespaces": [ 00:15:57.189 { 00:15:57.189 "nsid": 1, 00:15:57.189 "bdev_name": "Malloc2", 00:15:57.189 "name": "Malloc2", 00:15:57.189 "nguid": "526D91CDB3D849C69548227EEE0DD5E4", 00:15:57.189 "uuid": "526d91cd-b3d8-49c6-9548-227eee0dd5e4" 00:15:57.189 } 00:15:57.189 ] 00:15:57.189 } 00:15:57.189 ] 00:15:57.189 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:57.189 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2657589 00:15:57.189 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:57.189 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:57.189 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:57.189 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:57.189 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:15:57.189 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:15:57.189 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:15:57.189 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:57.190 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:15:57.190 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:15:57.190 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:15:57.190 [2024-10-01 16:40:48.832372] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:57.456 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:57.456 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:57.456 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:57.456 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:57.456 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:57.456 Malloc4 00:15:57.456 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:57.715 [2024-10-01 16:40:49.283519] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:57.715 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:57.715 Asynchronous Event Request test 00:15:57.715 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:57.715 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:57.715 Registering asynchronous event callbacks... 00:15:57.715 Starting namespace attribute notice tests for all controllers... 00:15:57.715 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:57.715 aer_cb - Changed Namespace 00:15:57.715 Cleaning up... 
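(Note: the subsystem dump that follows reflects the namespace hot-add just performed. The AER-triggering sequence reduces to three RPCs, shown here as a sketch assuming a target already serving nqn.2019-07.io.spdk:cnode2 and a local SPDK checkout in place of the CI workspace path:

    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    scripts/rpc.py nvmf_get_subsystems

After the add, Malloc4 appears as nsid 2 of cnode2 and the aer test binary observes the "Changed Namespace" notice logged above.)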
00:15:57.974 [ 00:15:57.974 { 00:15:57.974 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:57.974 "subtype": "Discovery", 00:15:57.974 "listen_addresses": [], 00:15:57.974 "allow_any_host": true, 00:15:57.974 "hosts": [] 00:15:57.974 }, 00:15:57.974 { 00:15:57.974 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:57.974 "subtype": "NVMe", 00:15:57.974 "listen_addresses": [ 00:15:57.974 { 00:15:57.974 "trtype": "VFIOUSER", 00:15:57.975 "adrfam": "IPv4", 00:15:57.975 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:57.975 "trsvcid": "0" 00:15:57.975 } 00:15:57.975 ], 00:15:57.975 "allow_any_host": true, 00:15:57.975 "hosts": [], 00:15:57.975 "serial_number": "SPDK1", 00:15:57.975 "model_number": "SPDK bdev Controller", 00:15:57.975 "max_namespaces": 32, 00:15:57.975 "min_cntlid": 1, 00:15:57.975 "max_cntlid": 65519, 00:15:57.975 "namespaces": [ 00:15:57.975 { 00:15:57.975 "nsid": 1, 00:15:57.975 "bdev_name": "Malloc1", 00:15:57.975 "name": "Malloc1", 00:15:57.975 "nguid": "BF444AEB60DC40AEAF0E4A13DC202B5B", 00:15:57.975 "uuid": "bf444aeb-60dc-40ae-af0e-4a13dc202b5b" 00:15:57.975 }, 00:15:57.975 { 00:15:57.975 "nsid": 2, 00:15:57.975 "bdev_name": "Malloc3", 00:15:57.975 "name": "Malloc3", 00:15:57.975 "nguid": "213FAD7BF112440AB706703DAED6596F", 00:15:57.975 "uuid": "213fad7b-f112-440a-b706-703daed6596f" 00:15:57.975 } 00:15:57.975 ] 00:15:57.975 }, 00:15:57.975 { 00:15:57.975 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:57.975 "subtype": "NVMe", 00:15:57.975 "listen_addresses": [ 00:15:57.975 { 00:15:57.975 "trtype": "VFIOUSER", 00:15:57.975 "adrfam": "IPv4", 00:15:57.975 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:57.975 "trsvcid": "0" 00:15:57.975 } 00:15:57.975 ], 00:15:57.975 "allow_any_host": true, 00:15:57.975 "hosts": [], 00:15:57.975 "serial_number": "SPDK2", 00:15:57.975 "model_number": "SPDK bdev Controller", 00:15:57.975 "max_namespaces": 32, 00:15:57.975 "min_cntlid": 1, 00:15:57.975 "max_cntlid": 65519, 00:15:57.975 "namespaces": [ 00:15:57.975 { 00:15:57.975 "nsid": 1, 00:15:57.975 "bdev_name": "Malloc2", 00:15:57.975 "name": "Malloc2", 00:15:57.975 "nguid": "526D91CDB3D849C69548227EEE0DD5E4", 00:15:57.975 "uuid": "526d91cd-b3d8-49c6-9548-227eee0dd5e4" 00:15:57.975 }, 00:15:57.975 { 00:15:57.975 "nsid": 2, 00:15:57.975 "bdev_name": "Malloc4", 00:15:57.975 "name": "Malloc4", 00:15:57.975 "nguid": "FB845A82437F4EBF902D1AF966CAD75F", 00:15:57.975 "uuid": "fb845a82-437f-4ebf-902d-1af966cad75f" 00:15:57.975 } 00:15:57.975 ] 00:15:57.975 } 00:15:57.975 ] 00:15:57.975 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2657589 00:15:57.975 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:57.975 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2649931 00:15:57.975 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2649931 ']' 00:15:57.975 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2649931 00:15:57.975 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:57.975 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.975 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2649931 00:15:57.975 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:57.975 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:57.975 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2649931' 00:15:57.975 killing process with pid 2649931 00:15:57.975 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2649931 00:15:57.975 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2649931 00:15:58.235 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:58.235 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:58.235 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:58.235 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:58.235 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:58.235 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2657630 00:15:58.235 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2657630' 00:15:58.235 Process pid: 2657630 00:15:58.235 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:58.235 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:58.235 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2657630 00:15:58.235 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2657630 ']' 00:15:58.235 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.235 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.235 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.235 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.235 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:58.235 [2024-10-01 16:40:49.799843] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:58.235 [2024-10-01 16:40:49.800725] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:15:58.235 [2024-10-01 16:40:49.800766] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.235 [2024-10-01 16:40:49.878556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:58.496 [2024-10-01 16:40:49.946035] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.496 [2024-10-01 16:40:49.946074] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.496 [2024-10-01 16:40:49.946081] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.496 [2024-10-01 16:40:49.946088] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.496 [2024-10-01 16:40:49.946093] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.496 [2024-10-01 16:40:49.946216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.496 [2024-10-01 16:40:49.946347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.496 [2024-10-01 16:40:49.946468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.496 [2024-10-01 16:40:49.946471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.496 [2024-10-01 16:40:50.011956] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:58.496 [2024-10-01 16:40:50.012104] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:58.496 [2024-10-01 16:40:50.012241] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:58.496 [2024-10-01 16:40:50.012475] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:58.496 [2024-10-01 16:40:50.012695] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
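(Note: the interrupt-mode bring-up traced in this block reduces to two commands, shown as a sketch with the CI workspace prefix dropped; the flags are copied verbatim from this run, and the -M -I transport arguments are what setup_nvmf_vfio_user passes when invoked with --interrupt-mode, per the shell trace above:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
)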
00:15:58.496 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.496 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:58.496 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:59.434 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:59.694 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:59.694 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:59.694 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:59.694 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:59.694 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:59.954 Malloc1 00:15:59.954 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:00.214 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:00.473 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:00.473 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:00.473 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:00.473 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:00.800 Malloc2 00:16:00.800 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:01.089 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:01.089 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:01.374 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:01.374 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2657630 00:16:01.374 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 2657630 ']' 00:16:01.374 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2657630 00:16:01.374 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:16:01.374 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:01.374 16:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2657630 00:16:01.374 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:01.374 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:01.374 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2657630' 00:16:01.374 killing process with pid 2657630 00:16:01.374 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2657630 00:16:01.374 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2657630 00:16:01.634 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:01.634 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:01.634 00:16:01.634 real 0m50.719s 00:16:01.634 user 3m16.051s 00:16:01.634 sys 0m2.807s 00:16:01.634 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:01.634 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:01.634 ************************************ 00:16:01.634 END TEST nvmf_vfio_user 00:16:01.634 ************************************ 00:16:01.634 16:40:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:01.634 16:40:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:01.634 16:40:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:01.634 16:40:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:01.634 ************************************ 00:16:01.634 START TEST nvmf_vfio_user_nvme_compliance 00:16:01.634 ************************************ 00:16:01.634 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:01.634 * Looking for test storage... 
00:16:01.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:01.634 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:01.634 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:16:01.634 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:01.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.895 --rc genhtml_branch_coverage=1 00:16:01.895 --rc genhtml_function_coverage=1 00:16:01.895 --rc genhtml_legend=1 00:16:01.895 --rc geninfo_all_blocks=1 00:16:01.895 --rc geninfo_unexecuted_blocks=1 00:16:01.895 00:16:01.895 ' 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:01.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.895 --rc genhtml_branch_coverage=1 00:16:01.895 --rc genhtml_function_coverage=1 00:16:01.895 --rc genhtml_legend=1 00:16:01.895 --rc geninfo_all_blocks=1 00:16:01.895 --rc geninfo_unexecuted_blocks=1 00:16:01.895 00:16:01.895 ' 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:01.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.895 --rc genhtml_branch_coverage=1 00:16:01.895 --rc genhtml_function_coverage=1 00:16:01.895 --rc genhtml_legend=1 00:16:01.895 --rc geninfo_all_blocks=1 00:16:01.895 --rc geninfo_unexecuted_blocks=1 00:16:01.895 00:16:01.895 ' 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:01.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.895 --rc genhtml_branch_coverage=1 00:16:01.895 --rc genhtml_function_coverage=1 00:16:01.895 --rc genhtml_legend=1 00:16:01.895 --rc geninfo_all_blocks=1 00:16:01.895 --rc 
geninfo_unexecuted_blocks=1 00:16:01.895 00:16:01.895 ' 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:01.895 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:01.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2658376 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2658376' 00:16:01.896 Process pid: 2658376 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2658376 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 2658376 ']' 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:01.896 16:40:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:01.896 [2024-10-01 16:40:53.475907] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:16:01.896 [2024-10-01 16:40:53.475968] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.896 [2024-10-01 16:40:53.553125] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:02.157 [2024-10-01 16:40:53.615861] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.157 [2024-10-01 16:40:53.615896] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.157 [2024-10-01 16:40:53.615904] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.157 [2024-10-01 16:40:53.615910] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.157 [2024-10-01 16:40:53.615916] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:02.157 [2024-10-01 16:40:53.615975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.157 [2024-10-01 16:40:53.616109] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:02.157 [2024-10-01 16:40:53.616246] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.726 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:02.726 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:16:02.726 16:40:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:03.665 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:03.665 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:03.665 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:03.665 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.665 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:03.925 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.926 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:03.926 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:03.926 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.926 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:03.926 malloc0 00:16:03.926 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.926 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:03.926 16:40:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.926 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:03.926 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.926 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:03.926 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.926 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:03.926 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.926 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:03.926 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.926 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:03.926 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.926 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:03.926 00:16:03.926 00:16:03.926 CUnit - A unit testing framework for C - Version 2.1-3 00:16:03.926 http://cunit.sourceforge.net/ 00:16:03.926 00:16:03.926 00:16:03.926 Suite: nvme_compliance 00:16:03.926 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-01 16:40:55.585379] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.926 [2024-10-01 16:40:55.586707] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:03.926 [2024-10-01 16:40:55.586718] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:03.926 [2024-10-01 16:40:55.586723] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:03.926 [2024-10-01 16:40:55.588392] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.185 passed 00:16:04.185 Test: admin_identify_ctrlr_verify_fused ...[2024-10-01 16:40:55.679948] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.185 [2024-10-01 16:40:55.682965] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.185 passed 00:16:04.185 Test: admin_identify_ns ...[2024-10-01 16:40:55.773506] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.185 [2024-10-01 16:40:55.832979] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:04.185 [2024-10-01 16:40:55.840979] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:04.185 [2024-10-01 16:40:55.862081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:16:04.445 passed 00:16:04.445 Test: admin_get_features_mandatory_features ...[2024-10-01 16:40:55.952840] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.445 [2024-10-01 16:40:55.955855] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.445 passed 00:16:04.445 Test: admin_get_features_optional_features ...[2024-10-01 16:40:56.045395] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.445 [2024-10-01 16:40:56.048410] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.445 passed 00:16:04.705 Test: admin_set_features_number_of_queues ...[2024-10-01 16:40:56.137502] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.705 [2024-10-01 16:40:56.242058] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.705 passed 00:16:04.705 Test: admin_get_log_page_mandatory_logs ...[2024-10-01 16:40:56.330394] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.705 [2024-10-01 16:40:56.333410] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.705 passed 00:16:04.965 Test: admin_get_log_page_with_lpo ...[2024-10-01 16:40:56.423700] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.965 [2024-10-01 16:40:56.490980] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:04.965 [2024-10-01 16:40:56.504035] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.965 passed 00:16:04.965 Test: fabric_property_get ...[2024-10-01 16:40:56.592358] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.965 [2024-10-01 16:40:56.593603] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:04.965 [2024-10-01 16:40:56.595379] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:04.965 passed 00:16:05.224 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-01 16:40:56.684987] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.224 [2024-10-01 16:40:56.686236] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:05.224 [2024-10-01 16:40:56.688003] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.224 passed 00:16:05.224 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-01 16:40:56.776516] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.224 [2024-10-01 16:40:56.859976] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:05.224 [2024-10-01 16:40:56.875978] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:05.224 [2024-10-01 16:40:56.881064] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.484 passed 00:16:05.484 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-01 16:40:56.971860] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.484 [2024-10-01 16:40:56.973099] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:05.484 [2024-10-01 16:40:56.974881] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.484 passed 00:16:05.484 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-01 16:40:57.063532] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.484 [2024-10-01 16:40:57.137976] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:05.484 [2024-10-01 16:40:57.162009] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:05.745 [2024-10-01 16:40:57.167051] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.745 passed 00:16:05.745 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-01 16:40:57.255389] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.745 [2024-10-01 16:40:57.256633] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:05.745 [2024-10-01 16:40:57.256652] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:05.745 [2024-10-01 16:40:57.258417] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.745 passed 00:16:05.745 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-01 16:40:57.347527] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.005 [2024-10-01 16:40:57.438975] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:06.005 [2024-10-01 16:40:57.446976] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:06.005 [2024-10-01 16:40:57.454977] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:06.005 [2024-10-01 16:40:57.462976] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:06.005 [2024-10-01 16:40:57.492059] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.005 passed 00:16:06.005 Test: admin_create_io_sq_verify_pc ...[2024-10-01 16:40:57.580360] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:06.005 [2024-10-01 16:40:57.598983] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:06.005 [2024-10-01 16:40:57.616536] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:06.005 passed 00:16:06.265 Test: admin_create_io_qp_max_qps ...[2024-10-01 16:40:57.704043] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.204 [2024-10-01 16:40:58.817980] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:07.775 [2024-10-01 16:40:59.196974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.775 passed 00:16:07.775 Test: admin_create_io_sq_shared_cq ...[2024-10-01 16:40:59.288319] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.775 [2024-10-01 16:40:59.419978] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:07.775 [2024-10-01 16:40:59.457032] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.035 passed 00:16:08.035 00:16:08.035 Run Summary: Type Total Ran Passed Failed Inactive 00:16:08.035 suites 1 1 n/a 0 0 00:16:08.035 tests 18 18 18 0 0 00:16:08.035 asserts 360 
360 360 0 n/a 00:16:08.035 00:16:08.035 Elapsed time = 1.613 seconds 00:16:08.035 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2658376 00:16:08.035 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 2658376 ']' 00:16:08.035 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 2658376 00:16:08.035 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:16:08.035 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:08.035 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2658376 00:16:08.035 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:08.035 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:08.035 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2658376' 00:16:08.035 killing process with pid 2658376 00:16:08.035 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 2658376 00:16:08.035 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 2658376 00:16:08.035 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:08.035 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:08.035 00:16:08.035 real 0m6.477s 00:16:08.035 user 0m18.509s 00:16:08.035 sys 0m0.497s 00:16:08.035 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:08.035 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:08.035 ************************************ 00:16:08.035 END TEST nvmf_vfio_user_nvme_compliance 00:16:08.035 ************************************ 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:08.295 ************************************ 00:16:08.295 START TEST nvmf_vfio_user_fuzz 00:16:08.295 ************************************ 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:08.295 * Looking for test storage... 
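Annotation: the killprocess sequence above is a reusable shutdown pattern: kill -0 probes liveness without delivering a signal, ps --no-headers -o comm= resolves the command name (reactor_0 for an SPDK app) so a privileged wrapper can be special-cased, and wait reaps the child. A minimal standalone sketch of that pattern; the sudo branch is an assumption, not the exact code in autotest_common.sh:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                  # no pid recorded, nothing to stop
    kill -0 "$pid" 2>/dev/null || return 0     # signal 0 = liveness probe only
    local name
    name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 for an SPDK target
    if [ "$name" = "sudo" ]; then
        sudo kill "$pid"                       # assumed handling when a sudo wrapper owns the pid
    else
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true            # reap the child and collect its exit code
}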
00:16:08.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:08.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.295 --rc genhtml_branch_coverage=1 00:16:08.295 --rc genhtml_function_coverage=1 00:16:08.295 --rc genhtml_legend=1 00:16:08.295 --rc geninfo_all_blocks=1 00:16:08.295 --rc geninfo_unexecuted_blocks=1 00:16:08.295 00:16:08.295 ' 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:08.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.295 --rc genhtml_branch_coverage=1 00:16:08.295 --rc genhtml_function_coverage=1 00:16:08.295 --rc genhtml_legend=1 00:16:08.295 --rc geninfo_all_blocks=1 00:16:08.295 --rc geninfo_unexecuted_blocks=1 00:16:08.295 00:16:08.295 ' 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:08.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.295 --rc genhtml_branch_coverage=1 00:16:08.295 --rc genhtml_function_coverage=1 00:16:08.295 --rc genhtml_legend=1 00:16:08.295 --rc geninfo_all_blocks=1 00:16:08.295 --rc geninfo_unexecuted_blocks=1 00:16:08.295 00:16:08.295 ' 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:08.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.295 --rc genhtml_branch_coverage=1 00:16:08.295 --rc genhtml_function_coverage=1 00:16:08.295 --rc genhtml_legend=1 00:16:08.295 --rc geninfo_all_blocks=1 00:16:08.295 --rc geninfo_unexecuted_blocks=1 00:16:08.295 00:16:08.295 ' 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.295 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:08.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2659583 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2659583' 00:16:08.296 Process pid: 2659583 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2659583 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2659583 ']' 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
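Annotation: the vfio-user fuzz target bring-up that follows (the nvmf_tgt launch through nvmf_subsystem_add_listener, then the 30-second nvme_fuzz run) reduces to the sequence below. This is a condensed sketch using SPDK's scripts/rpc.py directly in place of the rpc_cmd wrapper; it assumes the target's default RPC socket at /var/tmp/spdk.sock and uses sleep as a crude stand-in for waitforlisten:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &       # core 0, all trace flags, as below
nvmfpid=$!
sleep 2                                                # waitforlisten polls the RPC socket instead
mkdir -p /var/run/vfio-user
$SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0     # 64 MiB bdev, 512 B blocks
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0
$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

The -t 30 time bound and -S 123456 seed make the fuzz run reproducible; the opcode dump at the end of the run reports which admin and I/O opcodes the fuzzer completed successfully.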
00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:08.296 16:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:09.235 16:41:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.235 16:41:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:16:09.235 16:41:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:10.177 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:10.177 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.177 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.177 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.177 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:10.177 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:10.177 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.177 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.177 malloc0 00:16:10.177 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.177 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:10.177 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.177 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.177 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.177 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:10.177 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.177 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.437 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.437 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:10.437 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.437 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:10.437 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:10.437 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:10.437 16:41:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:42.539 Fuzzing completed. Shutting down the fuzz application 00:16:42.539 00:16:42.539 Dumping successful admin opcodes: 00:16:42.539 8, 9, 10, 24, 00:16:42.539 Dumping successful io opcodes: 00:16:42.539 0, 00:16:42.540 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1122522, total successful commands: 4420, random_seed: 3980850368 00:16:42.540 NS: 0x200003a1ef00 admin qp, Total commands completed: 141421, total successful commands: 1148, random_seed: 101088704 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2659583 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2659583 ']' 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 2659583 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2659583 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2659583' 00:16:42.540 killing process with pid 2659583 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 2659583 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 2659583 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:42.540 00:16:42.540 real 0m32.970s 00:16:42.540 user 0m36.186s 00:16:42.540 sys 0m25.667s 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:42.540 ************************************ 00:16:42.540 END TEST nvmf_vfio_user_fuzz 00:16:42.540 ************************************ 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:42.540 ************************************ 00:16:42.540 START TEST nvmf_auth_target 00:16:42.540 ************************************ 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:42.540 * Looking for test storage... 00:16:42.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:16:42.540 16:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:42.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.540 --rc genhtml_branch_coverage=1 00:16:42.540 --rc genhtml_function_coverage=1 00:16:42.540 --rc genhtml_legend=1 00:16:42.540 --rc geninfo_all_blocks=1 00:16:42.540 --rc geninfo_unexecuted_blocks=1 00:16:42.540 00:16:42.540 ' 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:42.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.540 --rc genhtml_branch_coverage=1 00:16:42.540 --rc genhtml_function_coverage=1 00:16:42.540 --rc genhtml_legend=1 00:16:42.540 --rc geninfo_all_blocks=1 00:16:42.540 --rc geninfo_unexecuted_blocks=1 00:16:42.540 00:16:42.540 ' 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:42.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.540 --rc genhtml_branch_coverage=1 00:16:42.540 --rc genhtml_function_coverage=1 00:16:42.540 --rc genhtml_legend=1 00:16:42.540 --rc geninfo_all_blocks=1 00:16:42.540 --rc geninfo_unexecuted_blocks=1 00:16:42.540 00:16:42.540 ' 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:42.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.540 --rc genhtml_branch_coverage=1 00:16:42.540 --rc genhtml_function_coverage=1 00:16:42.540 --rc genhtml_legend=1 00:16:42.540 --rc geninfo_all_blocks=1 00:16:42.540 --rc geninfo_unexecuted_blocks=1 00:16:42.540 00:16:42.540 ' 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:42.540 16:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.540 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:42.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:42.541 16:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:49.123 
16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:49.123 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.123 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:49.123 16:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:49.124 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:49.124 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:49.124 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:49.124 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:49.124 16:41:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:49.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:16:49.124 00:16:49.124 --- 10.0.0.2 ping statistics --- 00:16:49.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.124 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:49.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:49.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:16:49.124 00:16:49.124 --- 10.0.0.1 ping statistics --- 00:16:49.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.124 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=2668618 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 2668618 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2668618 ']' 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
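Annotation: the nvmf_tcp_init steps above move one port of the back-to-back E810 pair (cvl_0_0) into the cvl_0_0_ns_spdk namespace, so the target at 10.0.0.2 (inside the namespace) and the initiator at 10.0.0.1 (root namespace) exchange traffic over a real link rather than loopback. The same wiring with the helpers stripped away; the cvl_* names are this rig's ice-driver devices, so substitute your own interface pair:

TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                         # target port now visible only in the netns
ip addr add 10.0.0.1/24 dev "$INI_IF"                     # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                        # root ns -> target, as in the log above
ip netns exec "$NS" ping -c 1 10.0.0.1    # target ns -> initiator

Every target-side command from here on is prefixed with ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD array), which is why the nvmf_tgt for the auth test is launched inside the namespace.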
00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:49.124 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2668812 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=a6eeb39a458529a8095406a94b241afb1db9e2c55bc0b824 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.iSo 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key a6eeb39a458529a8095406a94b241afb1db9e2c55bc0b824 0 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 a6eeb39a458529a8095406a94b241afb1db9e2c55bc0b824 0 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=a6eeb39a458529a8095406a94b241afb1db9e2c55bc0b824 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
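
gen_dhchap_key above draws len/2 random bytes with xxd and hands the resulting hex string to an inline python snippet (its body is not shown in the xtrace) that prints the DH-HMAC-CHAP secret representation "DHHC-1:<hash id>:<base64>:", where the hash id is 00/01/02/03 for null/SHA-256/SHA-384/SHA-512 (matching the digest argument 0-3 passed to format_dhchap_key) and the base64 appears to cover the ASCII secret followed by a little-endian CRC-32 of it, as nvme-cli's gen-dhchap-key does. A hedged reconstruction of that one step, since the real python body is elided here:

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48-char hex secret
python - "$key" <<'EOF'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                          # ASCII hex chars are the key bytes
blob = secret + struct.pack("<I", zlib.crc32(secret))  # secret || CRC-32(secret), LE
print(f"DHHC-1:00:{base64.b64encode(blob).decode()}:") # 00 = null digest case
EOF

Decoding the first generated secret below (DHHC-1:00:YTZlZWIz...) indeed yields the ASCII string a6eeb39a..., consistent with this layout.
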
00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.iSo 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.iSo 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.iSo 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=09ef5ed1770dcafb78e3e6366bd19bd1df0e50ac5ab628f570b6aad8755b4a68 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.eCC 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 09ef5ed1770dcafb78e3e6366bd19bd1df0e50ac5ab628f570b6aad8755b4a68 3 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 09ef5ed1770dcafb78e3e6366bd19bd1df0e50ac5ab628f570b6aad8755b4a68 3 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=09ef5ed1770dcafb78e3e6366bd19bd1df0e50ac5ab628f570b6aad8755b4a68 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.eCC 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.eCC 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.eCC 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=8e84c7e5fa9aecf147e386dd69322316 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.73r 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 8e84c7e5fa9aecf147e386dd69322316 1 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 8e84c7e5fa9aecf147e386dd69322316 1 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=8e84c7e5fa9aecf147e386dd69322316 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:49.697 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.73r 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.73r 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.73r 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=5e971077ff8a3e1e93ccfb282aaac38655f8b5e077a03785 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.G1m 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 5e971077ff8a3e1e93ccfb282aaac38655f8b5e077a03785 2 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 5e971077ff8a3e1e93ccfb282aaac38655f8b5e077a03785 2 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:49.959 16:41:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=5e971077ff8a3e1e93ccfb282aaac38655f8b5e077a03785 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.G1m 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.G1m 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.G1m 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=ec8f922a24967b7fba1c6caabec0662af9702ac27dc7b294 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.4Sk 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key ec8f922a24967b7fba1c6caabec0662af9702ac27dc7b294 2 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 ec8f922a24967b7fba1c6caabec0662af9702ac27dc7b294 2 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=ec8f922a24967b7fba1c6caabec0662af9702ac27dc7b294 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.4Sk 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.4Sk 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.4Sk 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=84ce2355c1085cf942831552ae1f603f 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.Eu3 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 84ce2355c1085cf942831552ae1f603f 1 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 84ce2355c1085cf942831552ae1f603f 1 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=84ce2355c1085cf942831552ae1f603f 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.Eu3 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.Eu3 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Eu3 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=ac90b1976c0939349bf2fa8b4368edb6ba568717e48f720e4a504a2d5694f431 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.5AM 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key ac90b1976c0939349bf2fa8b4368edb6ba568717e48f720e4a504a2d5694f431 3 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 ac90b1976c0939349bf2fa8b4368edb6ba568717e48f720e4a504a2d5694f431 3 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=ac90b1976c0939349bf2fa8b4368edb6ba568717e48f720e4a504a2d5694f431 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:49.959 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:50.221 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.5AM 00:16:50.221 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.5AM 00:16:50.221 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.5AM 00:16:50.221 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:50.221 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2668618 00:16:50.221 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2668618 ']' 00:16:50.221 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.221 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:50.221 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.221 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:50.221 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.221 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:50.221 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:50.221 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2668812 /var/tmp/host.sock 00:16:50.221 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2668812 ']' 00:16:50.221 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:50.221 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:50.221 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:50.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
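
At this point four host keys (keys[0..3]) and three controller keys (ckeys[0..2]; ckeys[3] is deliberately left empty) exist as 0600 temp files, and two SPDK applications are running: the target (nvmf_tgt, pid 2668618, inside the namespace, RPC on the default /var/tmp/spdk.sock) and an initiator-side spdk_tgt (pid 2668812, RPC on /var/tmp/host.sock). Every later call splits along that line; auth.sh@31 shows the hostrpc wrapper expanding, which is essentially:

rpc_cmd() { scripts/rpc.py "$@"; }                        # target RPC (/var/tmp/spdk.sock)
hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # initiator-side RPC
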
00:16:50.222 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:50.222 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.483 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:50.483 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:50.483 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:50.483 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.483 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.483 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.483 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:50.483 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.iSo 00:16:50.483 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.483 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.483 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.483 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.iSo 00:16:50.483 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.iSo 00:16:50.743 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.eCC ]] 00:16:50.743 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eCC 00:16:50.743 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.743 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.743 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.743 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eCC 00:16:50.743 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eCC 00:16:51.004 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:51.004 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.73r 00:16:51.004 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.004 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.004 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.004 16:41:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.73r 00:16:51.004 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.73r 00:16:51.265 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.G1m ]] 00:16:51.265 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.G1m 00:16:51.265 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.265 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.265 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.265 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.G1m 00:16:51.265 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.G1m 00:16:51.526 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:51.526 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4Sk 00:16:51.526 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.526 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.526 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.526 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.4Sk 00:16:51.526 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.4Sk 00:16:51.786 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Eu3 ]] 00:16:51.786 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Eu3 00:16:51.786 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.786 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.786 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.786 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Eu3 00:16:51.786 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Eu3 00:16:52.047 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:52.047 16:41:43 
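
The loop over "${!keys[@]}" running here (auth.sh@108-113) registers every secret file under a stable key name on both RPC servers, so target and host can later refer to the secrets as key0..key3 and ckey0..ckey2 by name. Condensed, with this run's temp-file names:

keys=(/tmp/spdk.key-null.iSo /tmp/spdk.key-sha256.73r /tmp/spdk.key-sha384.4Sk /tmp/spdk.key-sha512.5AM)
for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
    hostrpc keyring_file_add_key "key$i" "${keys[$i]}"
    # plus the same pair of calls for ckey$i whenever a controller key was generated
done
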
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.5AM 00:16:52.047 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.047 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.047 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.047 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.5AM 00:16:52.047 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.5AM 00:16:52.047 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:52.047 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:52.047 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.047 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.047 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:52.047 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:52.308 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:52.308 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.308 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.308 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:52.308 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:52.308 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.308 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.308 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.308 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.308 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.308 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.308 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.308 
16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.567 00:16:52.567 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.567 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.567 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.827 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.827 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.827 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.827 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.827 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.827 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.827 { 00:16:52.827 "cntlid": 1, 00:16:52.827 "qid": 0, 00:16:52.827 "state": "enabled", 00:16:52.827 "thread": "nvmf_tgt_poll_group_000", 00:16:52.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:16:52.827 "listen_address": { 00:16:52.827 "trtype": "TCP", 00:16:52.827 "adrfam": "IPv4", 00:16:52.827 "traddr": "10.0.0.2", 00:16:52.827 "trsvcid": "4420" 00:16:52.827 }, 00:16:52.827 "peer_address": { 00:16:52.827 "trtype": "TCP", 00:16:52.827 "adrfam": "IPv4", 00:16:52.827 "traddr": "10.0.0.1", 00:16:52.827 "trsvcid": "39858" 00:16:52.827 }, 00:16:52.827 "auth": { 00:16:52.827 "state": "completed", 00:16:52.827 "digest": "sha256", 00:16:52.827 "dhgroup": "null" 00:16:52.827 } 00:16:52.827 } 00:16:52.827 ]' 00:16:52.827 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.827 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.827 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.086 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:53.087 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.087 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.087 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.087 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.347 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:16:53.347 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:16:57.547 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.547 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:57.547 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.547 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.547 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.547 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.547 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:57.547 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:57.547 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:57.547 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.547 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:57.547 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:57.547 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:57.547 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.547 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.547 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.547 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.547 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.547 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.547 16:41:49 
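
Each connect_authenticate pass is verified twice: first the SPDK initiator attaches (bdev_nvme_attach_controller with --dhchap-key/--dhchap-ctrlr-key) and the target's nvmf_subsystem_get_qpairs must report auth.state "completed" with the expected digest and dhgroup; then the same secrets are replayed through the kernel initiator via nvme-cli, as above. Roughly, with this run's NQNs and the long secrets elided:

rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'  # "completed"
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
    --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
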
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.547 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.807 00:16:57.807 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.807 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.807 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.067 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.067 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.067 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.067 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.067 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.067 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.067 { 00:16:58.067 "cntlid": 3, 00:16:58.067 "qid": 0, 00:16:58.067 "state": "enabled", 00:16:58.067 "thread": "nvmf_tgt_poll_group_000", 00:16:58.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:16:58.067 "listen_address": { 00:16:58.067 "trtype": "TCP", 00:16:58.067 "adrfam": "IPv4", 00:16:58.067 "traddr": "10.0.0.2", 00:16:58.067 "trsvcid": "4420" 00:16:58.067 }, 00:16:58.067 "peer_address": { 00:16:58.067 "trtype": "TCP", 00:16:58.067 "adrfam": "IPv4", 00:16:58.067 "traddr": "10.0.0.1", 00:16:58.067 "trsvcid": "39868" 00:16:58.067 }, 00:16:58.067 "auth": { 00:16:58.067 "state": "completed", 00:16:58.067 "digest": "sha256", 00:16:58.067 "dhgroup": "null" 00:16:58.067 } 00:16:58.067 } 00:16:58.067 ]' 00:16:58.067 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.067 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.067 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.327 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:58.327 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.327 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.327 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.327 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.587 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:16:58.587 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:16:59.189 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.189 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:59.189 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.189 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.189 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.189 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.189 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:59.189 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:59.449 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:59.449 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.449 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:59.449 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:59.449 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:59.449 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.449 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.449 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.449 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.449 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.449 16:41:51 
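
The cycle repeating here (auth.sh@120-123 plus the connect_authenticate helpers it calls) is the heart of the test: for one (digest, dhgroup, keyid) combination, restrict the host side's DH-HMAC-CHAP negotiation, authorize the host NQN on the target with that key pair, attach, check the qpair, then tear everything down. One iteration, condensed from the xtrace:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'          # expect nvme0
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0  # auth.state "completed"
hostrpc bdev_nvme_detach_controller nvme0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
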
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.449 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.449 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.709 00:16:59.709 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.709 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.709 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.969 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.969 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.969 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.969 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.969 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.969 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.969 { 00:16:59.969 "cntlid": 5, 00:16:59.969 "qid": 0, 00:16:59.969 "state": "enabled", 00:16:59.969 "thread": "nvmf_tgt_poll_group_000", 00:16:59.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:16:59.969 "listen_address": { 00:16:59.969 "trtype": "TCP", 00:16:59.969 "adrfam": "IPv4", 00:16:59.969 "traddr": "10.0.0.2", 00:16:59.969 "trsvcid": "4420" 00:16:59.969 }, 00:16:59.969 "peer_address": { 00:16:59.969 "trtype": "TCP", 00:16:59.969 "adrfam": "IPv4", 00:16:59.969 "traddr": "10.0.0.1", 00:16:59.969 "trsvcid": "42082" 00:16:59.969 }, 00:16:59.969 "auth": { 00:16:59.969 "state": "completed", 00:16:59.969 "digest": "sha256", 00:16:59.969 "dhgroup": "null" 00:16:59.969 } 00:16:59.969 } 00:16:59.969 ]' 00:16:59.969 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.969 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.969 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.969 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:59.969 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.228 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.228 16:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.229 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.229 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:17:00.229 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:17:01.170 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.170 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:01.170 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.170 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.170 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.170 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.170 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:01.170 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:01.430 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:01.430 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.430 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:01.430 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:01.430 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:01.430 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.430 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:17:01.430 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.430 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:01.430 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.430 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.430 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.430 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.690 00:17:01.690 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.690 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.690 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.949 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.949 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.949 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.949 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.949 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.949 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.949 { 00:17:01.949 "cntlid": 7, 00:17:01.949 "qid": 0, 00:17:01.949 "state": "enabled", 00:17:01.949 "thread": "nvmf_tgt_poll_group_000", 00:17:01.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:01.949 "listen_address": { 00:17:01.949 "trtype": "TCP", 00:17:01.949 "adrfam": "IPv4", 00:17:01.949 "traddr": "10.0.0.2", 00:17:01.949 "trsvcid": "4420" 00:17:01.949 }, 00:17:01.949 "peer_address": { 00:17:01.949 "trtype": "TCP", 00:17:01.949 "adrfam": "IPv4", 00:17:01.949 "traddr": "10.0.0.1", 00:17:01.949 "trsvcid": "42116" 00:17:01.949 }, 00:17:01.949 "auth": { 00:17:01.949 "state": "completed", 00:17:01.949 "digest": "sha256", 00:17:01.949 "dhgroup": "null" 00:17:01.949 } 00:17:01.949 } 00:17:01.949 ]' 00:17:01.949 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.949 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.949 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.949 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:01.949 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.949 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
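
Note the key3 pass just above: ckeys[3] was left empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion at auth.sh@68 drops the controller-key flags entirely and only the host is authenticated (no bidirectional DH-HMAC-CHAP). The calls reduce to:

rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
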
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.949 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.949 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.209 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:17:02.209 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.149 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.150 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.150 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.150 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.409 00:17:03.409 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.409 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.409 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.669 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.669 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.669 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.669 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.669 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.669 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.669 { 00:17:03.669 "cntlid": 9, 00:17:03.669 "qid": 0, 00:17:03.669 "state": "enabled", 00:17:03.669 "thread": "nvmf_tgt_poll_group_000", 00:17:03.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:03.669 "listen_address": { 00:17:03.669 "trtype": "TCP", 00:17:03.669 "adrfam": "IPv4", 00:17:03.669 "traddr": "10.0.0.2", 00:17:03.669 "trsvcid": "4420" 00:17:03.669 }, 00:17:03.669 "peer_address": { 00:17:03.669 "trtype": "TCP", 00:17:03.669 "adrfam": "IPv4", 00:17:03.669 "traddr": "10.0.0.1", 00:17:03.669 "trsvcid": "42132" 00:17:03.669 }, 00:17:03.669 "auth": { 00:17:03.669 "state": "completed", 00:17:03.669 "digest": "sha256", 00:17:03.670 "dhgroup": "ffdhe2048" 00:17:03.670 } 00:17:03.670 } 00:17:03.670 ]' 00:17:03.670 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.670 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.670 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.670 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:17:03.670 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.929 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.929 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.929 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.929 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:17:03.929 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:17:04.870 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.870 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:04.870 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.870 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.870 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.870 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.870 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:04.870 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:04.870 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:04.870 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.870 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:04.870 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:04.870 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:04.870 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.870 16:41:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.870 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.870 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.130 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.130 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.130 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.130 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.391 00:17:05.391 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.391 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.391 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.391 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.391 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.391 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.391 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.391 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.652 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.652 { 00:17:05.652 "cntlid": 11, 00:17:05.652 "qid": 0, 00:17:05.652 "state": "enabled", 00:17:05.652 "thread": "nvmf_tgt_poll_group_000", 00:17:05.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:05.652 "listen_address": { 00:17:05.652 "trtype": "TCP", 00:17:05.652 "adrfam": "IPv4", 00:17:05.652 "traddr": "10.0.0.2", 00:17:05.652 "trsvcid": "4420" 00:17:05.652 }, 00:17:05.652 "peer_address": { 00:17:05.652 "trtype": "TCP", 00:17:05.652 "adrfam": "IPv4", 00:17:05.652 "traddr": "10.0.0.1", 00:17:05.652 "trsvcid": "42164" 00:17:05.652 }, 00:17:05.652 "auth": { 00:17:05.652 "state": "completed", 00:17:05.652 "digest": "sha256", 00:17:05.652 "dhgroup": "ffdhe2048" 00:17:05.652 } 00:17:05.652 } 00:17:05.652 ]' 00:17:05.652 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.652 16:41:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.652 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.652 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:05.652 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.652 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.652 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.652 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.912 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:17:05.912 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:17:06.482 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.482 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:06.482 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.482 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.742 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.742 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.742 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:06.742 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:06.742 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:06.742 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.742 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:06.742 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:06.742 16:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:06.742 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.742 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.742 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.742 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.742 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.742 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.742 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.742 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.002 00:17:07.263 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.263 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.263 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.263 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.263 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.263 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.263 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.263 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.263 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.263 { 00:17:07.263 "cntlid": 13, 00:17:07.263 "qid": 0, 00:17:07.263 "state": "enabled", 00:17:07.263 "thread": "nvmf_tgt_poll_group_000", 00:17:07.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:07.263 "listen_address": { 00:17:07.263 "trtype": "TCP", 00:17:07.263 "adrfam": "IPv4", 00:17:07.263 "traddr": "10.0.0.2", 00:17:07.263 "trsvcid": "4420" 00:17:07.263 }, 00:17:07.263 "peer_address": { 00:17:07.263 "trtype": "TCP", 00:17:07.263 "adrfam": "IPv4", 00:17:07.263 "traddr": "10.0.0.1", 00:17:07.263 "trsvcid": "42188" 00:17:07.263 }, 00:17:07.263 "auth": { 00:17:07.263 "state": "completed", 00:17:07.263 "digest": 
"sha256", 00:17:07.263 "dhgroup": "ffdhe2048" 00:17:07.263 } 00:17:07.263 } 00:17:07.263 ]' 00:17:07.263 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.529 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.529 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.529 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:07.529 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.529 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.529 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.529 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.792 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:17:07.792 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:17:08.362 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.362 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:08.362 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.362 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.362 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.362 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.362 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:08.362 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:08.622 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:08.622 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.622 16:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:08.622 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:08.622 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:08.622 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.623 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:17:08.623 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.623 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.623 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.623 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.623 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.623 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.883 00:17:08.883 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.883 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.883 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.143 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.143 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.143 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.143 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.143 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.143 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.143 { 00:17:09.143 "cntlid": 15, 00:17:09.143 "qid": 0, 00:17:09.143 "state": "enabled", 00:17:09.143 "thread": "nvmf_tgt_poll_group_000", 00:17:09.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:09.143 "listen_address": { 00:17:09.143 "trtype": "TCP", 00:17:09.143 "adrfam": "IPv4", 00:17:09.143 "traddr": "10.0.0.2", 00:17:09.143 "trsvcid": "4420" 00:17:09.143 }, 00:17:09.143 "peer_address": { 00:17:09.143 "trtype": "TCP", 00:17:09.143 "adrfam": "IPv4", 00:17:09.143 "traddr": "10.0.0.1", 00:17:09.143 
"trsvcid": "43012" 00:17:09.143 }, 00:17:09.143 "auth": { 00:17:09.143 "state": "completed", 00:17:09.143 "digest": "sha256", 00:17:09.143 "dhgroup": "ffdhe2048" 00:17:09.143 } 00:17:09.143 } 00:17:09.143 ]' 00:17:09.143 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.143 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.143 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.404 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.404 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.404 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.404 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.404 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.665 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:17:09.665 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:17:10.236 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.236 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:10.236 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.236 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.236 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.236 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.236 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.236 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:10.236 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:10.497 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:10.497 16:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.497 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:10.497 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:10.497 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.497 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.497 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.497 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.497 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.497 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.497 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.497 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.497 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.757 00:17:10.757 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.757 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.757 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.018 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.018 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.018 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.018 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.018 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.018 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.018 { 00:17:11.018 "cntlid": 17, 00:17:11.018 "qid": 0, 00:17:11.018 "state": "enabled", 00:17:11.018 "thread": "nvmf_tgt_poll_group_000", 00:17:11.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:11.018 "listen_address": { 00:17:11.018 "trtype": "TCP", 00:17:11.018 "adrfam": "IPv4", 
00:17:11.018 "traddr": "10.0.0.2", 00:17:11.018 "trsvcid": "4420" 00:17:11.018 }, 00:17:11.018 "peer_address": { 00:17:11.018 "trtype": "TCP", 00:17:11.018 "adrfam": "IPv4", 00:17:11.018 "traddr": "10.0.0.1", 00:17:11.018 "trsvcid": "43036" 00:17:11.018 }, 00:17:11.018 "auth": { 00:17:11.018 "state": "completed", 00:17:11.018 "digest": "sha256", 00:17:11.018 "dhgroup": "ffdhe3072" 00:17:11.018 } 00:17:11.018 } 00:17:11.018 ]' 00:17:11.018 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.018 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.018 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.018 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:11.018 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.278 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.278 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.278 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.278 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:17:11.278 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:17:12.219 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.219 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:12.219 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.219 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.219 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.219 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.219 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:12.219 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:12.483 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:12.483 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.483 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:12.483 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:12.483 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:12.483 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.483 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.483 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.483 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.483 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.483 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.483 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.483 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.782 00:17:12.782 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.782 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.782 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.093 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.093 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.093 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.093 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.093 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.093 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.093 { 
00:17:13.093 "cntlid": 19, 00:17:13.093 "qid": 0, 00:17:13.093 "state": "enabled", 00:17:13.093 "thread": "nvmf_tgt_poll_group_000", 00:17:13.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:13.093 "listen_address": { 00:17:13.093 "trtype": "TCP", 00:17:13.093 "adrfam": "IPv4", 00:17:13.093 "traddr": "10.0.0.2", 00:17:13.093 "trsvcid": "4420" 00:17:13.093 }, 00:17:13.093 "peer_address": { 00:17:13.093 "trtype": "TCP", 00:17:13.093 "adrfam": "IPv4", 00:17:13.093 "traddr": "10.0.0.1", 00:17:13.093 "trsvcid": "43056" 00:17:13.093 }, 00:17:13.093 "auth": { 00:17:13.093 "state": "completed", 00:17:13.093 "digest": "sha256", 00:17:13.093 "dhgroup": "ffdhe3072" 00:17:13.093 } 00:17:13.093 } 00:17:13.093 ]' 00:17:13.093 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.093 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.093 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.094 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:13.094 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.094 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.094 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.094 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.379 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:17:13.379 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:17:13.949 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.949 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:13.949 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.949 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.949 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.949 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.949 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:13.949 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:14.209 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:14.209 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.209 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:14.209 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:14.209 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:14.209 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.209 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.209 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.209 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.209 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.209 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.209 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.209 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.470 00:17:14.470 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.470 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.470 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.730 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.730 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.730 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.730 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.730 16:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.730 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.730 { 00:17:14.730 "cntlid": 21, 00:17:14.730 "qid": 0, 00:17:14.730 "state": "enabled", 00:17:14.730 "thread": "nvmf_tgt_poll_group_000", 00:17:14.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:14.730 "listen_address": { 00:17:14.730 "trtype": "TCP", 00:17:14.730 "adrfam": "IPv4", 00:17:14.730 "traddr": "10.0.0.2", 00:17:14.730 "trsvcid": "4420" 00:17:14.730 }, 00:17:14.730 "peer_address": { 00:17:14.730 "trtype": "TCP", 00:17:14.730 "adrfam": "IPv4", 00:17:14.730 "traddr": "10.0.0.1", 00:17:14.730 "trsvcid": "43078" 00:17:14.730 }, 00:17:14.730 "auth": { 00:17:14.730 "state": "completed", 00:17:14.730 "digest": "sha256", 00:17:14.730 "dhgroup": "ffdhe3072" 00:17:14.730 } 00:17:14.730 } 00:17:14.730 ]' 00:17:14.730 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.730 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.730 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.991 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.991 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.991 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.991 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.991 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.252 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:17:15.252 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:17:15.822 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.822 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:15.822 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.822 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.822 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:15.822 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.822 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:15.822 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:16.083 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:16.083 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.083 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:16.083 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:16.083 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.083 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.083 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:17:16.083 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.083 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.083 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.083 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.083 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.083 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.344 00:17:16.344 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.344 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.344 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.603 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.603 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.603 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.603 16:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.603 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.603 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.603 { 00:17:16.603 "cntlid": 23, 00:17:16.603 "qid": 0, 00:17:16.603 "state": "enabled", 00:17:16.603 "thread": "nvmf_tgt_poll_group_000", 00:17:16.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:16.603 "listen_address": { 00:17:16.603 "trtype": "TCP", 00:17:16.603 "adrfam": "IPv4", 00:17:16.603 "traddr": "10.0.0.2", 00:17:16.603 "trsvcid": "4420" 00:17:16.603 }, 00:17:16.603 "peer_address": { 00:17:16.603 "trtype": "TCP", 00:17:16.603 "adrfam": "IPv4", 00:17:16.603 "traddr": "10.0.0.1", 00:17:16.603 "trsvcid": "43104" 00:17:16.603 }, 00:17:16.603 "auth": { 00:17:16.603 "state": "completed", 00:17:16.603 "digest": "sha256", 00:17:16.603 "dhgroup": "ffdhe3072" 00:17:16.603 } 00:17:16.603 } 00:17:16.603 ]' 00:17:16.603 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.603 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.603 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.863 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.863 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.863 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.863 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.863 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.125 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:17:17.125 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:17:17.696 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.696 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:17.696 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.696 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.696 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:17.696 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.696 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.696 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:17.696 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:17.958 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:17.958 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.958 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:17.958 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:17.958 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:17.958 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.958 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.958 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.958 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.958 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.958 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.958 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.958 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.220 00:17:18.220 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.220 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.220 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.481 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.481 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.481 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.481 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.481 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.481 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.481 { 00:17:18.481 "cntlid": 25, 00:17:18.481 "qid": 0, 00:17:18.481 "state": "enabled", 00:17:18.481 "thread": "nvmf_tgt_poll_group_000", 00:17:18.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:18.481 "listen_address": { 00:17:18.481 "trtype": "TCP", 00:17:18.481 "adrfam": "IPv4", 00:17:18.481 "traddr": "10.0.0.2", 00:17:18.481 "trsvcid": "4420" 00:17:18.481 }, 00:17:18.481 "peer_address": { 00:17:18.481 "trtype": "TCP", 00:17:18.481 "adrfam": "IPv4", 00:17:18.481 "traddr": "10.0.0.1", 00:17:18.481 "trsvcid": "45304" 00:17:18.481 }, 00:17:18.481 "auth": { 00:17:18.481 "state": "completed", 00:17:18.481 "digest": "sha256", 00:17:18.481 "dhgroup": "ffdhe4096" 00:17:18.481 } 00:17:18.481 } 00:17:18.481 ]' 00:17:18.481 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.481 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.481 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.481 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:18.481 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.742 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.742 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.742 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.002 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:17:19.002 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:17:19.571 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.571 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:19.571 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.571 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.571 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.571 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.571 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:19.571 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:19.832 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:19.832 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.832 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:19.832 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:19.832 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:19.832 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.832 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.832 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.832 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.832 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.832 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.832 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.832 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.093 00:17:20.093 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.093 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.093 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.354 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.354 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.354 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.354 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.354 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.354 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.354 { 00:17:20.354 "cntlid": 27, 00:17:20.354 "qid": 0, 00:17:20.354 "state": "enabled", 00:17:20.354 "thread": "nvmf_tgt_poll_group_000", 00:17:20.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:20.354 "listen_address": { 00:17:20.354 "trtype": "TCP", 00:17:20.354 "adrfam": "IPv4", 00:17:20.354 "traddr": "10.0.0.2", 00:17:20.354 "trsvcid": "4420" 00:17:20.354 }, 00:17:20.354 "peer_address": { 00:17:20.354 "trtype": "TCP", 00:17:20.354 "adrfam": "IPv4", 00:17:20.354 "traddr": "10.0.0.1", 00:17:20.354 "trsvcid": "45324" 00:17:20.354 }, 00:17:20.354 "auth": { 00:17:20.354 "state": "completed", 00:17:20.354 "digest": "sha256", 00:17:20.354 "dhgroup": "ffdhe4096" 00:17:20.354 } 00:17:20.354 } 00:17:20.354 ]' 00:17:20.354 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.354 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.354 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.616 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:20.616 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.616 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.616 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.616 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.877 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:17:20.877 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:17:21.447 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:21.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.447 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:21.447 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.447 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.447 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.447 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.447 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:21.447 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:21.707 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:21.707 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.707 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:21.707 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:21.707 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:21.707 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.707 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.707 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.707 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.707 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.707 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.707 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.707 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.967 00:17:22.227 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
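
The trace above keeps repeating one authenticate-and-verify cycle. Condensed for readability, it looks roughly like the sketch below. This is a reconstruction from the visible commands, not part of the captured output: RPC, HOSTSOCK, HOSTNQN and SUBNQN are shorthand variables introduced here (their values are taken verbatim from the trace), key material is elided, and the bare $RPC calls assume the target application's default RPC socket, which is what rpc_cmd resolves to in this run.

    # one connect_authenticate iteration (sha256 / ffdhe4096 / key2), condensed
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTSOCK=/var/tmp/host.sock
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    # restrict the host-side initiator to the digest/dhgroup combination under test
    $RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    # authorize the host on the target with the keypair under test, then dial in
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2
    $RPC -s $HOSTSOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # the auth block of the resulting qpair should report the negotiated parameters
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'    # expect: completed
    # tear down the bdev controller; after the nvme-cli connect/disconnect round
    # (omitted here) the host is deauthorized so the next iteration starts clean
    $RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0
    $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN
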
00:17:22.227 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.227 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.227 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.227 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.227 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.227 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.227 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.227 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.227 { 00:17:22.227 "cntlid": 29, 00:17:22.227 "qid": 0, 00:17:22.227 "state": "enabled", 00:17:22.227 "thread": "nvmf_tgt_poll_group_000", 00:17:22.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:22.227 "listen_address": { 00:17:22.227 "trtype": "TCP", 00:17:22.227 "adrfam": "IPv4", 00:17:22.227 "traddr": "10.0.0.2", 00:17:22.227 "trsvcid": "4420" 00:17:22.227 }, 00:17:22.227 "peer_address": { 00:17:22.227 "trtype": "TCP", 00:17:22.227 "adrfam": "IPv4", 00:17:22.227 "traddr": "10.0.0.1", 00:17:22.227 "trsvcid": "45352" 00:17:22.227 }, 00:17:22.227 "auth": { 00:17:22.227 "state": "completed", 00:17:22.227 "digest": "sha256", 00:17:22.227 "dhgroup": "ffdhe4096" 00:17:22.227 } 00:17:22.227 } 00:17:22.227 ]' 00:17:22.227 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.488 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.488 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.488 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.488 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.488 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.488 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.488 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.749 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:17:22.749 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: 
--dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:17:23.320 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.321 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:23.321 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.321 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.321 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.321 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.321 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:23.321 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:23.580 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:23.580 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.580 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:23.580 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:23.580 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.580 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.580 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:17:23.580 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.580 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.580 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.580 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.580 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.580 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.150 00:17:24.150 16:42:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.150 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.150 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.150 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.150 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.150 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.150 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.150 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.150 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.150 { 00:17:24.150 "cntlid": 31, 00:17:24.150 "qid": 0, 00:17:24.150 "state": "enabled", 00:17:24.150 "thread": "nvmf_tgt_poll_group_000", 00:17:24.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:24.150 "listen_address": { 00:17:24.150 "trtype": "TCP", 00:17:24.150 "adrfam": "IPv4", 00:17:24.150 "traddr": "10.0.0.2", 00:17:24.150 "trsvcid": "4420" 00:17:24.150 }, 00:17:24.150 "peer_address": { 00:17:24.150 "trtype": "TCP", 00:17:24.150 "adrfam": "IPv4", 00:17:24.150 "traddr": "10.0.0.1", 00:17:24.150 "trsvcid": "45378" 00:17:24.150 }, 00:17:24.150 "auth": { 00:17:24.150 "state": "completed", 00:17:24.150 "digest": "sha256", 00:17:24.150 "dhgroup": "ffdhe4096" 00:17:24.150 } 00:17:24.150 } 00:17:24.150 ]' 00:17:24.150 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.409 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.409 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.409 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:24.409 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.409 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.409 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.409 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.669 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:17:24.669 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret 
DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:17:25.240 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.240 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:25.240 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.240 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.240 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.240 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.240 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.240 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:25.240 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:25.500 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:25.500 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.500 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:25.500 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:25.500 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:25.500 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.500 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.500 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.500 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.500 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.500 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.500 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.500 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.759 00:17:25.759 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.759 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.759 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.018 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.018 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.018 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.018 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.018 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.018 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.018 { 00:17:26.018 "cntlid": 33, 00:17:26.018 "qid": 0, 00:17:26.018 "state": "enabled", 00:17:26.018 "thread": "nvmf_tgt_poll_group_000", 00:17:26.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:26.018 "listen_address": { 00:17:26.018 "trtype": "TCP", 00:17:26.018 "adrfam": "IPv4", 00:17:26.018 "traddr": "10.0.0.2", 00:17:26.018 "trsvcid": "4420" 00:17:26.018 }, 00:17:26.018 "peer_address": { 00:17:26.018 "trtype": "TCP", 00:17:26.018 "adrfam": "IPv4", 00:17:26.018 "traddr": "10.0.0.1", 00:17:26.018 "trsvcid": "45408" 00:17:26.018 }, 00:17:26.018 "auth": { 00:17:26.018 "state": "completed", 00:17:26.018 "digest": "sha256", 00:17:26.018 "dhgroup": "ffdhe6144" 00:17:26.018 } 00:17:26.018 } 00:17:26.018 ]' 00:17:26.018 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.018 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.018 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.018 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:26.277 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.277 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.277 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.277 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.536 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret 
DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:17:26.536 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:17:27.104 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.104 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:27.104 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.104 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.104 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.104 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.104 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:27.104 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:27.363 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:27.363 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.363 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:27.363 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:27.363 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:27.363 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.363 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.363 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.363 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.363 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.363 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.363 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.363 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.621 00:17:27.621 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.621 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.621 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.881 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.881 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.881 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.881 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.881 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.881 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.881 { 00:17:27.881 "cntlid": 35, 00:17:27.881 "qid": 0, 00:17:27.881 "state": "enabled", 00:17:27.881 "thread": "nvmf_tgt_poll_group_000", 00:17:27.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:27.881 "listen_address": { 00:17:27.881 "trtype": "TCP", 00:17:27.881 "adrfam": "IPv4", 00:17:27.881 "traddr": "10.0.0.2", 00:17:27.881 "trsvcid": "4420" 00:17:27.881 }, 00:17:27.881 "peer_address": { 00:17:27.881 "trtype": "TCP", 00:17:27.881 "adrfam": "IPv4", 00:17:27.881 "traddr": "10.0.0.1", 00:17:27.881 "trsvcid": "45424" 00:17:27.881 }, 00:17:27.881 "auth": { 00:17:27.881 "state": "completed", 00:17:27.881 "digest": "sha256", 00:17:27.881 "dhgroup": "ffdhe6144" 00:17:27.881 } 00:17:27.881 } 00:17:27.881 ]' 00:17:27.881 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.881 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:27.881 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.881 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.141 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.141 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.141 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.141 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.400 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:17:28.400 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:17:28.969 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.969 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:28.969 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.969 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.969 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.969 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.969 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:28.969 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:29.228 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:29.228 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.228 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:29.228 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:29.228 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:29.228 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.228 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.228 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.228 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.228 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.228 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.228 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.228 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.487 00:17:29.487 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.746 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.747 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.747 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.747 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.747 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.747 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.747 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.747 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.747 { 00:17:29.747 "cntlid": 37, 00:17:29.747 "qid": 0, 00:17:29.747 "state": "enabled", 00:17:29.747 "thread": "nvmf_tgt_poll_group_000", 00:17:29.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:29.747 "listen_address": { 00:17:29.747 "trtype": "TCP", 00:17:29.747 "adrfam": "IPv4", 00:17:29.747 "traddr": "10.0.0.2", 00:17:29.747 "trsvcid": "4420" 00:17:29.747 }, 00:17:29.747 "peer_address": { 00:17:29.747 "trtype": "TCP", 00:17:29.747 "adrfam": "IPv4", 00:17:29.747 "traddr": "10.0.0.1", 00:17:29.747 "trsvcid": "38334" 00:17:29.747 }, 00:17:29.747 "auth": { 00:17:29.747 "state": "completed", 00:17:29.747 "digest": "sha256", 00:17:29.747 "dhgroup": "ffdhe6144" 00:17:29.747 } 00:17:29.747 } 00:17:29.747 ]' 00:17:29.747 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.006 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.006 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.006 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:30.006 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.006 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.006 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:30.006 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.266 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:17:30.266 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:17:30.835 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.835 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:30.835 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.835 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.835 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.835 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.835 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:30.835 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:31.095 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:31.095 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.095 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:31.095 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:31.095 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:31.095 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.095 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:17:31.095 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.095 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.095 16:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.095 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:31.095 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.095 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.664 00:17:31.664 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.664 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.664 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.664 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.664 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.664 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.664 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.664 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.664 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.664 { 00:17:31.664 "cntlid": 39, 00:17:31.664 "qid": 0, 00:17:31.664 "state": "enabled", 00:17:31.664 "thread": "nvmf_tgt_poll_group_000", 00:17:31.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:31.664 "listen_address": { 00:17:31.664 "trtype": "TCP", 00:17:31.664 "adrfam": "IPv4", 00:17:31.664 "traddr": "10.0.0.2", 00:17:31.664 "trsvcid": "4420" 00:17:31.664 }, 00:17:31.664 "peer_address": { 00:17:31.664 "trtype": "TCP", 00:17:31.664 "adrfam": "IPv4", 00:17:31.664 "traddr": "10.0.0.1", 00:17:31.664 "trsvcid": "38364" 00:17:31.664 }, 00:17:31.664 "auth": { 00:17:31.664 "state": "completed", 00:17:31.664 "digest": "sha256", 00:17:31.664 "dhgroup": "ffdhe6144" 00:17:31.664 } 00:17:31.664 } 00:17:31.664 ]' 00:17:31.664 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.924 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.924 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.924 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.924 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.924 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:31.924 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.924 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.184 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:17:32.184 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:17:32.754 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.754 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:32.755 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.755 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.755 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.755 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.755 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.755 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:32.755 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:33.015 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:33.015 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.015 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:33.015 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:33.015 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:33.015 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.015 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.015 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
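
The block structure repeats because target/auth.sh iterates the DH groups and key ids; the @119/@120/@121/@123 trace lines visible in this excerpt imply roughly the following driver loop. This is a sketch, not the script itself: the full contents of the dhgroups and keys arrays and the enclosing sha256 digest selection are assumptions beyond what this excerpt shows (only ffdhe3072 through ffdhe8192 and keys 0..3 appear above).

    for dhgroup in "${dhgroups[@]}"; do      # auth.sh@119 — ffdhe3072..ffdhe8192 visible here
      for keyid in "${!keys[@]}"; do         # auth.sh@120 — key ids 0..3 appear above
        # point the host initiator at exactly one digest/dhgroup pair...
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"  # auth.sh@121
        # ...then run the attach/verify/detach cycle sketched earlier
        connect_authenticate sha256 "$dhgroup" "$keyid"                                     # auth.sh@123
      done
    done
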
00:17:33.015 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.015 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.015 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.015 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.015 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.585 00:17:33.586 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.586 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.586 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.846 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.846 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.846 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.846 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.847 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.847 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.847 { 00:17:33.847 "cntlid": 41, 00:17:33.847 "qid": 0, 00:17:33.847 "state": "enabled", 00:17:33.847 "thread": "nvmf_tgt_poll_group_000", 00:17:33.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:33.847 "listen_address": { 00:17:33.847 "trtype": "TCP", 00:17:33.847 "adrfam": "IPv4", 00:17:33.847 "traddr": "10.0.0.2", 00:17:33.847 "trsvcid": "4420" 00:17:33.847 }, 00:17:33.847 "peer_address": { 00:17:33.847 "trtype": "TCP", 00:17:33.847 "adrfam": "IPv4", 00:17:33.847 "traddr": "10.0.0.1", 00:17:33.847 "trsvcid": "38376" 00:17:33.847 }, 00:17:33.847 "auth": { 00:17:33.847 "state": "completed", 00:17:33.847 "digest": "sha256", 00:17:33.847 "dhgroup": "ffdhe8192" 00:17:33.847 } 00:17:33.847 } 00:17:33.847 ]' 00:17:33.847 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.847 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.847 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.847 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:33.847 16:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.847 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.847 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.847 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.107 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:17:34.107 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:17:35.048 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.048 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:35.048 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.048 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.048 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.048 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.048 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:35.048 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:35.048 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:35.048 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.048 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:35.048 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:35.048 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:35.048 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.049 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.049 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.049 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.049 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.049 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.049 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.049 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.619 00:17:35.619 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.619 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.619 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.878 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.878 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.878 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.878 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.878 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.878 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.878 { 00:17:35.878 "cntlid": 43, 00:17:35.878 "qid": 0, 00:17:35.878 "state": "enabled", 00:17:35.878 "thread": "nvmf_tgt_poll_group_000", 00:17:35.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:35.878 "listen_address": { 00:17:35.878 "trtype": "TCP", 00:17:35.878 "adrfam": "IPv4", 00:17:35.878 "traddr": "10.0.0.2", 00:17:35.878 "trsvcid": "4420" 00:17:35.878 }, 00:17:35.878 "peer_address": { 00:17:35.878 "trtype": "TCP", 00:17:35.878 "adrfam": "IPv4", 00:17:35.878 "traddr": "10.0.0.1", 00:17:35.878 "trsvcid": "38404" 00:17:35.878 }, 00:17:35.878 "auth": { 00:17:35.878 "state": "completed", 00:17:35.878 "digest": "sha256", 00:17:35.878 "dhgroup": "ffdhe8192" 00:17:35.878 } 00:17:35.878 } 00:17:35.878 ]' 00:17:35.878 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.878 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:35.878 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.138 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.138 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.138 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.138 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.138 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.398 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:17:36.398 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:17:36.968 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.968 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:36.968 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.968 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.968 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.968 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.968 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:36.968 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:37.228 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:37.228 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.228 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:37.228 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:37.228 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:37.228 16:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.228 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.228 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.228 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.228 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.228 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.228 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.228 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.798 00:17:37.798 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.798 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.798 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.057 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.057 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.057 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.057 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.057 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.057 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.057 { 00:17:38.057 "cntlid": 45, 00:17:38.057 "qid": 0, 00:17:38.057 "state": "enabled", 00:17:38.057 "thread": "nvmf_tgt_poll_group_000", 00:17:38.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:38.057 "listen_address": { 00:17:38.057 "trtype": "TCP", 00:17:38.057 "adrfam": "IPv4", 00:17:38.057 "traddr": "10.0.0.2", 00:17:38.057 "trsvcid": "4420" 00:17:38.057 }, 00:17:38.057 "peer_address": { 00:17:38.057 "trtype": "TCP", 00:17:38.057 "adrfam": "IPv4", 00:17:38.057 "traddr": "10.0.0.1", 00:17:38.057 "trsvcid": "38438" 00:17:38.057 }, 00:17:38.057 "auth": { 00:17:38.057 "state": "completed", 00:17:38.057 "digest": "sha256", 00:17:38.057 "dhgroup": "ffdhe8192" 00:17:38.057 } 00:17:38.057 } 00:17:38.057 ]' 00:17:38.057 
16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.057 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.057 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.057 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.057 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.316 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.316 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.316 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.316 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:17:38.316 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:17:39.255 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.256 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:39.256 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.256 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.256 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.256 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.256 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:39.256 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:39.256 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:39.256 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.256 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:39.256 16:42:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:39.256 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:39.256 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.256 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:17:39.256 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.256 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.515 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.515 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:39.515 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.516 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.087 00:17:40.087 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.087 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.087 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.087 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.087 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.087 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.087 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.087 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.087 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.087 { 00:17:40.087 "cntlid": 47, 00:17:40.087 "qid": 0, 00:17:40.087 "state": "enabled", 00:17:40.087 "thread": "nvmf_tgt_poll_group_000", 00:17:40.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:40.087 "listen_address": { 00:17:40.087 "trtype": "TCP", 00:17:40.087 "adrfam": "IPv4", 00:17:40.087 "traddr": "10.0.0.2", 00:17:40.087 "trsvcid": "4420" 00:17:40.087 }, 00:17:40.087 "peer_address": { 00:17:40.087 "trtype": "TCP", 00:17:40.087 "adrfam": "IPv4", 00:17:40.087 "traddr": "10.0.0.1", 00:17:40.087 "trsvcid": "39908" 00:17:40.087 }, 00:17:40.087 "auth": { 00:17:40.087 "state": "completed", 00:17:40.087 
"digest": "sha256", 00:17:40.087 "dhgroup": "ffdhe8192" 00:17:40.087 } 00:17:40.087 } 00:17:40.087 ]' 00:17:40.087 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.348 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.348 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.348 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:40.348 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.348 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.348 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.348 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.609 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:17:40.609 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:17:41.179 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.179 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:41.179 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.179 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.179 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.179 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:41.179 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.179 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.179 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:41.179 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:41.438 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:41.438 16:42:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.438 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:41.438 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:41.438 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:41.438 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.438 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.438 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.438 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.438 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.438 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.438 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.438 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.697 00:17:41.698 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.698 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.698 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.957 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.957 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.957 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.957 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.957 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.957 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.957 { 00:17:41.957 "cntlid": 49, 00:17:41.957 "qid": 0, 00:17:41.957 "state": "enabled", 00:17:41.957 "thread": "nvmf_tgt_poll_group_000", 00:17:41.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:41.957 "listen_address": { 00:17:41.957 "trtype": "TCP", 00:17:41.957 "adrfam": "IPv4", 
00:17:41.957 "traddr": "10.0.0.2", 00:17:41.957 "trsvcid": "4420" 00:17:41.957 }, 00:17:41.957 "peer_address": { 00:17:41.957 "trtype": "TCP", 00:17:41.957 "adrfam": "IPv4", 00:17:41.957 "traddr": "10.0.0.1", 00:17:41.957 "trsvcid": "39938" 00:17:41.957 }, 00:17:41.957 "auth": { 00:17:41.957 "state": "completed", 00:17:41.957 "digest": "sha384", 00:17:41.957 "dhgroup": "null" 00:17:41.957 } 00:17:41.957 } 00:17:41.957 ]' 00:17:41.957 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.957 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.957 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.957 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:41.957 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.217 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.217 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.217 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.217 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:17:42.217 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.157 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.416 00:17:43.417 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.417 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.417 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.676 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.676 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.676 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.676 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.676 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.676 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.676 { 00:17:43.676 "cntlid": 51, 00:17:43.676 "qid": 0, 00:17:43.676 "state": "enabled", 
00:17:43.676 "thread": "nvmf_tgt_poll_group_000", 00:17:43.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:43.676 "listen_address": { 00:17:43.676 "trtype": "TCP", 00:17:43.676 "adrfam": "IPv4", 00:17:43.676 "traddr": "10.0.0.2", 00:17:43.676 "trsvcid": "4420" 00:17:43.676 }, 00:17:43.676 "peer_address": { 00:17:43.676 "trtype": "TCP", 00:17:43.676 "adrfam": "IPv4", 00:17:43.676 "traddr": "10.0.0.1", 00:17:43.676 "trsvcid": "39962" 00:17:43.676 }, 00:17:43.676 "auth": { 00:17:43.676 "state": "completed", 00:17:43.676 "digest": "sha384", 00:17:43.676 "dhgroup": "null" 00:17:43.676 } 00:17:43.676 } 00:17:43.676 ]' 00:17:43.676 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.676 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.676 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.937 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:43.937 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.937 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.937 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.937 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.197 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:17:44.197 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:17:44.765 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.765 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:44.765 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.765 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.765 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.765 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.766 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:44.766 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:45.025 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:45.025 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.025 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:45.025 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:45.025 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:45.025 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.025 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.025 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.025 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.025 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.025 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.025 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.025 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.284 00:17:45.284 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.284 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.284 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.542 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.542 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.542 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.542 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.542 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.542 16:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.542 { 00:17:45.542 "cntlid": 53, 00:17:45.542 "qid": 0, 00:17:45.542 "state": "enabled", 00:17:45.542 "thread": "nvmf_tgt_poll_group_000", 00:17:45.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:45.542 "listen_address": { 00:17:45.542 "trtype": "TCP", 00:17:45.542 "adrfam": "IPv4", 00:17:45.542 "traddr": "10.0.0.2", 00:17:45.542 "trsvcid": "4420" 00:17:45.542 }, 00:17:45.542 "peer_address": { 00:17:45.542 "trtype": "TCP", 00:17:45.542 "adrfam": "IPv4", 00:17:45.542 "traddr": "10.0.0.1", 00:17:45.542 "trsvcid": "39978" 00:17:45.542 }, 00:17:45.542 "auth": { 00:17:45.542 "state": "completed", 00:17:45.542 "digest": "sha384", 00:17:45.542 "dhgroup": "null" 00:17:45.542 } 00:17:45.542 } 00:17:45.542 ]' 00:17:45.542 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.542 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.542 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.542 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:45.542 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.801 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.801 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.801 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.801 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:17:45.801 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:17:46.739 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.740 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:46.740 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.740 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.740 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.740 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:46.740 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:46.740 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:46.999 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:46.999 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.999 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:46.999 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:46.999 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:46.999 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.999 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:17:46.999 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.999 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.999 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.999 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:46.999 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:46.999 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.259 00:17:47.259 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.259 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.259 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.259 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.259 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.259 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.259 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.517 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.517 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.517 { 00:17:47.517 "cntlid": 55, 00:17:47.517 "qid": 0, 00:17:47.517 "state": "enabled", 00:17:47.517 "thread": "nvmf_tgt_poll_group_000", 00:17:47.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:47.517 "listen_address": { 00:17:47.517 "trtype": "TCP", 00:17:47.517 "adrfam": "IPv4", 00:17:47.517 "traddr": "10.0.0.2", 00:17:47.517 "trsvcid": "4420" 00:17:47.517 }, 00:17:47.517 "peer_address": { 00:17:47.517 "trtype": "TCP", 00:17:47.517 "adrfam": "IPv4", 00:17:47.517 "traddr": "10.0.0.1", 00:17:47.517 "trsvcid": "39994" 00:17:47.517 }, 00:17:47.517 "auth": { 00:17:47.517 "state": "completed", 00:17:47.517 "digest": "sha384", 00:17:47.517 "dhgroup": "null" 00:17:47.517 } 00:17:47.517 } 00:17:47.517 ]' 00:17:47.517 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.517 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.517 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.517 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:47.517 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.517 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.517 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.517 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.775 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:17:47.776 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:17:48.345 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.345 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:48.345 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.345 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.345 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.345 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.345 16:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.345 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:48.345 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:48.605 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:48.605 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.605 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:48.605 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:48.605 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:48.605 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.605 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.605 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.605 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.605 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.605 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.605 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.606 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.866 00:17:48.866 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.866 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.866 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.125 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.125 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.125 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:49.125 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.125 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.125 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.125 { 00:17:49.125 "cntlid": 57, 00:17:49.125 "qid": 0, 00:17:49.125 "state": "enabled", 00:17:49.125 "thread": "nvmf_tgt_poll_group_000", 00:17:49.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:49.125 "listen_address": { 00:17:49.125 "trtype": "TCP", 00:17:49.125 "adrfam": "IPv4", 00:17:49.125 "traddr": "10.0.0.2", 00:17:49.125 "trsvcid": "4420" 00:17:49.125 }, 00:17:49.125 "peer_address": { 00:17:49.125 "trtype": "TCP", 00:17:49.125 "adrfam": "IPv4", 00:17:49.125 "traddr": "10.0.0.1", 00:17:49.125 "trsvcid": "41198" 00:17:49.125 }, 00:17:49.125 "auth": { 00:17:49.125 "state": "completed", 00:17:49.125 "digest": "sha384", 00:17:49.125 "dhgroup": "ffdhe2048" 00:17:49.125 } 00:17:49.125 } 00:17:49.125 ]' 00:17:49.125 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.125 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.125 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.384 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:49.384 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.384 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.384 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.384 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.643 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:17:49.644 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:17:50.215 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.215 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:50.215 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.215 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.215 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.215 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.215 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:50.215 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:50.480 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:50.480 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.480 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:50.480 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:50.480 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:50.480 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.480 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.480 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.480 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.480 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.480 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.480 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.480 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.798 00:17:50.798 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.798 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.798 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.103 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.103 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.103 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.103 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.103 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.103 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.103 { 00:17:51.103 "cntlid": 59, 00:17:51.103 "qid": 0, 00:17:51.103 "state": "enabled", 00:17:51.103 "thread": "nvmf_tgt_poll_group_000", 00:17:51.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:51.103 "listen_address": { 00:17:51.103 "trtype": "TCP", 00:17:51.103 "adrfam": "IPv4", 00:17:51.103 "traddr": "10.0.0.2", 00:17:51.103 "trsvcid": "4420" 00:17:51.103 }, 00:17:51.103 "peer_address": { 00:17:51.103 "trtype": "TCP", 00:17:51.103 "adrfam": "IPv4", 00:17:51.103 "traddr": "10.0.0.1", 00:17:51.103 "trsvcid": "41216" 00:17:51.103 }, 00:17:51.103 "auth": { 00:17:51.103 "state": "completed", 00:17:51.103 "digest": "sha384", 00:17:51.103 "dhgroup": "ffdhe2048" 00:17:51.103 } 00:17:51.103 } 00:17:51.103 ]' 00:17:51.103 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.103 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.103 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.103 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:51.103 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.103 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.103 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.103 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.377 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:17:51.377 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:17:51.946 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.946 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:51.946 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.946 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.946 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.946 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.946 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:51.946 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:52.207 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:52.207 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.207 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:52.207 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:52.208 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:52.208 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.208 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.208 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.208 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.208 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.208 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.208 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.208 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.468 00:17:52.468 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.468 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.468 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.728 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.728 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.728 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.728 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.728 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.728 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.728 { 00:17:52.728 "cntlid": 61, 00:17:52.728 "qid": 0, 00:17:52.728 "state": "enabled", 00:17:52.728 "thread": "nvmf_tgt_poll_group_000", 00:17:52.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:52.728 "listen_address": { 00:17:52.728 "trtype": "TCP", 00:17:52.728 "adrfam": "IPv4", 00:17:52.728 "traddr": "10.0.0.2", 00:17:52.728 "trsvcid": "4420" 00:17:52.728 }, 00:17:52.728 "peer_address": { 00:17:52.728 "trtype": "TCP", 00:17:52.728 "adrfam": "IPv4", 00:17:52.728 "traddr": "10.0.0.1", 00:17:52.728 "trsvcid": "41240" 00:17:52.728 }, 00:17:52.728 "auth": { 00:17:52.728 "state": "completed", 00:17:52.728 "digest": "sha384", 00:17:52.728 "dhgroup": "ffdhe2048" 00:17:52.728 } 00:17:52.728 } 00:17:52.728 ]' 00:17:52.728 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.728 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.728 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.988 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:52.988 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.988 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.988 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.988 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.247 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:17:53.247 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:17:53.817 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.817 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:53.817 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.817 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.818 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.818 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.818 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:53.818 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:54.078 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:54.078 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.078 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:54.078 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:54.078 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:54.078 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.078 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:17:54.078 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.078 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.078 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.078 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:54.078 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.078 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.337 00:17:54.337 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.337 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.337 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.597 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.597 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.597 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.597 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.597 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.597 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.597 { 00:17:54.597 "cntlid": 63, 00:17:54.597 "qid": 0, 00:17:54.597 "state": "enabled", 00:17:54.597 "thread": "nvmf_tgt_poll_group_000", 00:17:54.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:54.597 "listen_address": { 00:17:54.597 "trtype": "TCP", 00:17:54.597 "adrfam": "IPv4", 00:17:54.597 "traddr": "10.0.0.2", 00:17:54.597 "trsvcid": "4420" 00:17:54.597 }, 00:17:54.597 "peer_address": { 00:17:54.597 "trtype": "TCP", 00:17:54.597 "adrfam": "IPv4", 00:17:54.597 "traddr": "10.0.0.1", 00:17:54.597 "trsvcid": "41258" 00:17:54.597 }, 00:17:54.597 "auth": { 00:17:54.597 "state": "completed", 00:17:54.597 "digest": "sha384", 00:17:54.597 "dhgroup": "ffdhe2048" 00:17:54.597 } 00:17:54.597 } 00:17:54.597 ]' 00:17:54.597 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.597 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.597 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.857 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:54.857 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.857 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.857 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.857 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.116 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:17:55.116 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:17:55.687 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:55.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.687 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:55.687 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.687 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.687 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.687 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.687 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.687 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:55.687 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:55.947 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:55.947 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.947 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:55.947 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:55.947 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:55.947 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.947 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.947 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.947 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.947 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.947 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.947 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.947 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.206 
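Each pass in the trace above is one iteration of connect_authenticate from target/auth.sh: pin the host's DH-CHAP options to a single digest/dhgroup pair, authorize the host NQN on the subsystem with a key pair, attach a controller through the host bdev layer (which forces the authentication handshake), verify the resulting qpair, then tear everything down for the next combination. A minimal sketch of one such pass, assembled from the RPC calls visible in the trace; rpc.py is the SPDK script shown above, key0/ckey0 are key names the script registered earlier in the run, and SUBNQN/HOSTNQN are placeholder variables standing in for the cnode0 and uuid NQNs used here:

    # host side (-s /var/tmp/host.sock): negotiate exactly one digest and dhgroup
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # target side: authorize the host NQN with key0, plus ckey0 for bidirectional auth
    rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # authenticate by attaching a controller from the host stack
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # verify, then tear down before the next (digest, dhgroup, key) combination
    rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
    rpc.py nvmf_subsystem_get_qpairs "$SUBNQN"
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"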
00:17:56.206 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.206 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.206 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.465 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.465 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.465 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.465 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.465 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.465 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.465 { 00:17:56.465 "cntlid": 65, 00:17:56.465 "qid": 0, 00:17:56.465 "state": "enabled", 00:17:56.465 "thread": "nvmf_tgt_poll_group_000", 00:17:56.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:56.465 "listen_address": { 00:17:56.465 "trtype": "TCP", 00:17:56.465 "adrfam": "IPv4", 00:17:56.465 "traddr": "10.0.0.2", 00:17:56.465 "trsvcid": "4420" 00:17:56.465 }, 00:17:56.465 "peer_address": { 00:17:56.465 "trtype": "TCP", 00:17:56.465 "adrfam": "IPv4", 00:17:56.465 "traddr": "10.0.0.1", 00:17:56.465 "trsvcid": "41292" 00:17:56.465 }, 00:17:56.465 "auth": { 00:17:56.465 "state": "completed", 00:17:56.465 "digest": "sha384", 00:17:56.465 "dhgroup": "ffdhe3072" 00:17:56.465 } 00:17:56.465 } 00:17:56.465 ]' 00:17:56.465 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.465 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.465 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.465 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.465 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.725 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.725 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.725 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.725 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:17:56.725 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:17:57.662 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.662 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:57.662 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.662 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.662 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.662 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.662 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:57.662 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:57.921 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:57.921 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.921 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:57.921 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:57.921 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:57.921 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.921 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.921 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.921 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.921 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.921 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.921 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.921 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.180 00:17:58.180 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.180 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.180 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.441 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.441 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.441 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.441 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.441 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.441 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.441 { 00:17:58.441 "cntlid": 67, 00:17:58.441 "qid": 0, 00:17:58.441 "state": "enabled", 00:17:58.441 "thread": "nvmf_tgt_poll_group_000", 00:17:58.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:17:58.441 "listen_address": { 00:17:58.441 "trtype": "TCP", 00:17:58.441 "adrfam": "IPv4", 00:17:58.441 "traddr": "10.0.0.2", 00:17:58.441 "trsvcid": "4420" 00:17:58.441 }, 00:17:58.441 "peer_address": { 00:17:58.441 "trtype": "TCP", 00:17:58.441 "adrfam": "IPv4", 00:17:58.441 "traddr": "10.0.0.1", 00:17:58.441 "trsvcid": "41308" 00:17:58.441 }, 00:17:58.441 "auth": { 00:17:58.441 "state": "completed", 00:17:58.441 "digest": "sha384", 00:17:58.441 "dhgroup": "ffdhe3072" 00:17:58.441 } 00:17:58.441 } 00:17:58.441 ]' 00:17:58.441 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.441 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:58.441 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.441 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:58.441 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.441 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.441 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.441 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.701 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret 
DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:17:58.701 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:17:59.271 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.271 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:59.271 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.271 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.271 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.271 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.271 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:59.271 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:59.530 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:59.530 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.530 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:59.530 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:59.530 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:59.530 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.530 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.530 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.530 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.530 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.530 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.530 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.530 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.789 00:18:00.049 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.049 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.049 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.049 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.049 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.049 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.049 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.049 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.049 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.049 { 00:18:00.049 "cntlid": 69, 00:18:00.049 "qid": 0, 00:18:00.049 "state": "enabled", 00:18:00.049 "thread": "nvmf_tgt_poll_group_000", 00:18:00.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:00.049 "listen_address": { 00:18:00.049 "trtype": "TCP", 00:18:00.049 "adrfam": "IPv4", 00:18:00.049 "traddr": "10.0.0.2", 00:18:00.049 "trsvcid": "4420" 00:18:00.049 }, 00:18:00.049 "peer_address": { 00:18:00.049 "trtype": "TCP", 00:18:00.049 "adrfam": "IPv4", 00:18:00.049 "traddr": "10.0.0.1", 00:18:00.049 "trsvcid": "34602" 00:18:00.049 }, 00:18:00.049 "auth": { 00:18:00.049 "state": "completed", 00:18:00.049 "digest": "sha384", 00:18:00.049 "dhgroup": "ffdhe3072" 00:18:00.049 } 00:18:00.049 } 00:18:00.049 ]' 00:18:00.049 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.308 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:00.308 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.308 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:00.308 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.308 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.308 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.308 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:00.567 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:18:00.567 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:18:01.137 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.137 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:01.137 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.137 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.137 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.137 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.137 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:01.137 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:01.396 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:01.397 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.397 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:01.397 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:01.397 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:01.397 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.397 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:01.397 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.397 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.397 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.397 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
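One asymmetry worth noting: in the key3 passes (including the one beginning here), the host entry is added with --dhchap-key key3 only. The ckey expansion visible in the trace, ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), drops the controller-key flag when ckeys[3] is empty, and the matching nvme connect calls carry only --dhchap-secret with no --dhchap-ctrl-secret. So this combination presumably exercises unidirectional DH-CHAP: the target still authenticates the host, but the host does not require the controller to authenticate back. The contrast, sketched with the same RPC and key names as the trace:

    # bidirectional: the host also verifies the controller via the ctrlr key
    rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # unidirectional: only the host is authenticated
    rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3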
00:18:01.397 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.397 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.656 00:18:01.656 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.656 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.656 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.916 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.916 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.916 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.916 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.916 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.916 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.916 { 00:18:01.916 "cntlid": 71, 00:18:01.916 "qid": 0, 00:18:01.916 "state": "enabled", 00:18:01.916 "thread": "nvmf_tgt_poll_group_000", 00:18:01.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:01.916 "listen_address": { 00:18:01.916 "trtype": "TCP", 00:18:01.916 "adrfam": "IPv4", 00:18:01.916 "traddr": "10.0.0.2", 00:18:01.916 "trsvcid": "4420" 00:18:01.916 }, 00:18:01.916 "peer_address": { 00:18:01.916 "trtype": "TCP", 00:18:01.916 "adrfam": "IPv4", 00:18:01.916 "traddr": "10.0.0.1", 00:18:01.916 "trsvcid": "34630" 00:18:01.916 }, 00:18:01.916 "auth": { 00:18:01.916 "state": "completed", 00:18:01.916 "digest": "sha384", 00:18:01.916 "dhgroup": "ffdhe3072" 00:18:01.916 } 00:18:01.916 } 00:18:01.916 ]' 00:18:01.916 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.916 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.916 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.176 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.176 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.176 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.176 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.176 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.435 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:18:02.435 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:18:03.004 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.004 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:03.004 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.004 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.004 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.004 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.004 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.004 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:03.004 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:03.263 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:03.263 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.263 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:03.263 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:03.263 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:03.263 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.263 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.263 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.263 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.263 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
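After each attach, the script does not just trust the controller to appear; it asserts what the qpair actually negotiated. nvmf_subsystem_get_qpairs returns the JSON arrays shown throughout this trace, and three jq probes check the digest, the dhgroup, and that the auth state reached "completed". A sketch of that verification step, assuming the same socket layout as the rpc_cmd calls above:

    # qpair introspection: negotiated auth parameters must match what was configured
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

The same three checks run on every pass; only the expected dhgroup changes as the sweep advances.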
00:18:03.263 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.263 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.263 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.522 00:18:03.522 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.522 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.522 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.782 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.782 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.782 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.782 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.782 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.782 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.782 { 00:18:03.782 "cntlid": 73, 00:18:03.782 "qid": 0, 00:18:03.782 "state": "enabled", 00:18:03.782 "thread": "nvmf_tgt_poll_group_000", 00:18:03.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:03.782 "listen_address": { 00:18:03.782 "trtype": "TCP", 00:18:03.782 "adrfam": "IPv4", 00:18:03.782 "traddr": "10.0.0.2", 00:18:03.782 "trsvcid": "4420" 00:18:03.782 }, 00:18:03.782 "peer_address": { 00:18:03.782 "trtype": "TCP", 00:18:03.782 "adrfam": "IPv4", 00:18:03.782 "traddr": "10.0.0.1", 00:18:03.782 "trsvcid": "34652" 00:18:03.782 }, 00:18:03.782 "auth": { 00:18:03.782 "state": "completed", 00:18:03.782 "digest": "sha384", 00:18:03.782 "dhgroup": "ffdhe4096" 00:18:03.782 } 00:18:03.782 } 00:18:03.782 ]' 00:18:03.782 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.782 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:03.782 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.782 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:03.782 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.782 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.782 
16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.782 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.041 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:18:04.041 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:18:04.981 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.981 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:04.981 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.981 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.981 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.981 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.981 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:04.981 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:04.981 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:04.981 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.981 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:04.981 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:04.981 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:04.981 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.982 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.982 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.982 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.982 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.982 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.982 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.982 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.241 00:18:05.241 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.241 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.241 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.499 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.500 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.500 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.500 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.500 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.500 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.500 { 00:18:05.500 "cntlid": 75, 00:18:05.500 "qid": 0, 00:18:05.500 "state": "enabled", 00:18:05.500 "thread": "nvmf_tgt_poll_group_000", 00:18:05.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:05.500 "listen_address": { 00:18:05.500 "trtype": "TCP", 00:18:05.500 "adrfam": "IPv4", 00:18:05.500 "traddr": "10.0.0.2", 00:18:05.500 "trsvcid": "4420" 00:18:05.500 }, 00:18:05.500 "peer_address": { 00:18:05.500 "trtype": "TCP", 00:18:05.500 "adrfam": "IPv4", 00:18:05.500 "traddr": "10.0.0.1", 00:18:05.500 "trsvcid": "34690" 00:18:05.500 }, 00:18:05.500 "auth": { 00:18:05.500 "state": "completed", 00:18:05.500 "digest": "sha384", 00:18:05.500 "dhgroup": "ffdhe4096" 00:18:05.500 } 00:18:05.500 } 00:18:05.500 ]' 00:18:05.500 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.500 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.500 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.500 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:18:05.500 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.500 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.500 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.500 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.758 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:18:05.758 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:18:06.326 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.586 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.154 00:18:07.154 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.154 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.154 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.154 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.154 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.154 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.154 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.154 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.154 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.154 { 00:18:07.154 "cntlid": 77, 00:18:07.154 "qid": 0, 00:18:07.154 "state": "enabled", 00:18:07.154 "thread": "nvmf_tgt_poll_group_000", 00:18:07.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:07.154 "listen_address": { 00:18:07.154 "trtype": "TCP", 00:18:07.154 "adrfam": "IPv4", 00:18:07.154 "traddr": "10.0.0.2", 00:18:07.154 "trsvcid": "4420" 00:18:07.154 }, 00:18:07.154 "peer_address": { 00:18:07.154 "trtype": "TCP", 00:18:07.154 "adrfam": "IPv4", 00:18:07.154 "traddr": "10.0.0.1", 00:18:07.154 "trsvcid": "34712" 00:18:07.154 }, 00:18:07.154 "auth": { 00:18:07.154 "state": "completed", 00:18:07.154 "digest": "sha384", 00:18:07.154 "dhgroup": "ffdhe4096" 00:18:07.154 } 00:18:07.154 } 00:18:07.154 ]' 00:18:07.154 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.154 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.154 16:42:58 
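
[annotation] The attach step is the host-side half of the handshake: bdev_nvme_attach_controller opens the fabrics connection and performs DH-HMAC-CHAP with the named keys before the bdev becomes usable. The command as issued in the key2 iteration above, with the long xtrace expansion folded back onto one logical command:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
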
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.154 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:07.154 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.414 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.414 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.414 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.414 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:18:07.414 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:18:08.353 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.353 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:08.353 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.353 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.353 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.353 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.353 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:08.353 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:08.353 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:08.353 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.353 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:08.353 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:08.353 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.353 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.353 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:08.353 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.353 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.353 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.353 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.353 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.353 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.612 00:18:08.612 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.612 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.612 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.872 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.872 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.872 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.872 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.872 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.872 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.872 { 00:18:08.872 "cntlid": 79, 00:18:08.872 "qid": 0, 00:18:08.872 "state": "enabled", 00:18:08.872 "thread": "nvmf_tgt_poll_group_000", 00:18:08.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:08.872 "listen_address": { 00:18:08.872 "trtype": "TCP", 00:18:08.872 "adrfam": "IPv4", 00:18:08.872 "traddr": "10.0.0.2", 00:18:08.872 "trsvcid": "4420" 00:18:08.872 }, 00:18:08.872 "peer_address": { 00:18:08.872 "trtype": "TCP", 00:18:08.872 "adrfam": "IPv4", 00:18:08.872 "traddr": "10.0.0.1", 00:18:08.872 "trsvcid": "33804" 00:18:08.872 }, 00:18:08.872 "auth": { 00:18:08.872 "state": "completed", 00:18:08.872 "digest": "sha384", 00:18:08.872 "dhgroup": "ffdhe4096" 00:18:08.872 } 00:18:08.872 } 00:18:08.872 ]' 00:18:08.872 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.872 16:43:00 
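
[annotation] Note the add_host call just above passes only --dhchap-key key3: the `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expansion yields an empty array when no controller key is defined for that index, so key3 exercises unidirectional authentication. A self-contained illustration of the bash idiom (hypothetical values, not the run's keys):

    ckeys=([0]=c0 [1]=c1 [2]=c2)              # index 3 deliberately unset
    for idx in 0 3; do
        ckey=(${ckeys[$idx]:+--dhchap-ctrlr-key "ckey$idx"})
        echo "key$idx flags: ${ckey[*]:-<none>}"
    done
    # -> key0 flags: --dhchap-ctrlr-key ckey0
    # -> key3 flags: <none>
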
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.872 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.133 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:09.133 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.133 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.133 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.133 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.393 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:18:09.393 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:18:09.962 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.962 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:09.962 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.962 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.962 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.962 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.962 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.962 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:09.962 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:10.221 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:10.221 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.221 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:10.221 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:10.221 16:43:01 
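
[annotation] The nvme-cli side of the same test hands the secrets over directly. In the DHHC-1 strings, the second field selects the transformation applied to the configured secret per the DH-HMAC-CHAP spec (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the third is the base64-encoded key material plus CRC. Shape of the connect call used throughout this run, with placeholder secrets (-i 1 = one I/O queue, -l 0 = ctrl-loss-tmo):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        --dhchap-secret 'DHHC-1:00:<base64-key-and-crc>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<base64-key-and-crc>:'
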
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:10.221 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.221 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.221 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.222 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.222 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.482 00:18:10.741 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.741 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.741 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.741 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.741 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.741 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.741 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.741 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.741 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.741 { 00:18:10.741 "cntlid": 81, 00:18:10.741 "qid": 0, 00:18:10.741 "state": "enabled", 00:18:10.741 "thread": "nvmf_tgt_poll_group_000", 00:18:10.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:10.741 "listen_address": { 00:18:10.741 "trtype": "TCP", 00:18:10.741 "adrfam": "IPv4", 00:18:10.741 "traddr": "10.0.0.2", 00:18:10.741 "trsvcid": "4420" 00:18:10.741 }, 00:18:10.741 "peer_address": { 00:18:10.741 "trtype": "TCP", 00:18:10.741 "adrfam": "IPv4", 00:18:10.741 "traddr": "10.0.0.1", 00:18:10.741 "trsvcid": "33828" 00:18:10.741 }, 00:18:10.741 "auth": { 00:18:10.741 "state": "completed", 00:18:10.741 "digest": 
"sha384", 00:18:10.741 "dhgroup": "ffdhe6144" 00:18:10.741 } 00:18:10.741 } 00:18:10.741 ]' 00:18:10.741 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.001 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.001 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.001 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:11.001 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.001 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.001 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.001 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.261 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:18:11.261 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:18:11.830 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.830 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:11.830 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.830 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.830 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.830 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.830 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:11.830 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:12.090 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:12.090 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.090 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:12.091 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:12.091 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:12.091 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.091 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.091 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.091 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.091 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.091 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.091 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.091 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.659 00:18:12.659 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.659 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.659 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.659 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.659 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.659 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.659 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.659 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.659 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.659 { 00:18:12.659 "cntlid": 83, 00:18:12.659 "qid": 0, 00:18:12.659 "state": "enabled", 00:18:12.659 "thread": "nvmf_tgt_poll_group_000", 00:18:12.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:12.659 "listen_address": { 00:18:12.659 "trtype": "TCP", 00:18:12.659 "adrfam": "IPv4", 00:18:12.659 "traddr": "10.0.0.2", 00:18:12.659 
"trsvcid": "4420" 00:18:12.659 }, 00:18:12.659 "peer_address": { 00:18:12.659 "trtype": "TCP", 00:18:12.659 "adrfam": "IPv4", 00:18:12.659 "traddr": "10.0.0.1", 00:18:12.659 "trsvcid": "33856" 00:18:12.659 }, 00:18:12.659 "auth": { 00:18:12.659 "state": "completed", 00:18:12.659 "digest": "sha384", 00:18:12.659 "dhgroup": "ffdhe6144" 00:18:12.659 } 00:18:12.659 } 00:18:12.659 ]' 00:18:12.659 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.918 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.918 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.918 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:12.918 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.918 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.918 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.918 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.178 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:18:13.178 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:18:13.748 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.748 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:13.748 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.748 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.748 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.748 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.748 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:13.748 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:14.008 
16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:14.008 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.008 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:14.008 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:14.008 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:14.008 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.008 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.008 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.008 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.008 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.008 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.008 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.008 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.268 00:18:14.268 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.268 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.268 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.529 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.529 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.529 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.529 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.529 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.529 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.529 { 00:18:14.529 "cntlid": 85, 00:18:14.529 "qid": 0, 00:18:14.529 "state": "enabled", 00:18:14.529 "thread": "nvmf_tgt_poll_group_000", 00:18:14.529 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:14.529 "listen_address": { 00:18:14.529 "trtype": "TCP", 00:18:14.529 "adrfam": "IPv4", 00:18:14.529 "traddr": "10.0.0.2", 00:18:14.529 "trsvcid": "4420" 00:18:14.529 }, 00:18:14.529 "peer_address": { 00:18:14.529 "trtype": "TCP", 00:18:14.529 "adrfam": "IPv4", 00:18:14.529 "traddr": "10.0.0.1", 00:18:14.529 "trsvcid": "33886" 00:18:14.529 }, 00:18:14.529 "auth": { 00:18:14.529 "state": "completed", 00:18:14.529 "digest": "sha384", 00:18:14.529 "dhgroup": "ffdhe6144" 00:18:14.529 } 00:18:14.529 } 00:18:14.529 ]' 00:18:14.529 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.529 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.529 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.529 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:14.529 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.789 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.789 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.789 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.789 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:18:14.789 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:15.729 16:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.729 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.990 00:18:15.990 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.990 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.990 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.250 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.250 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.250 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.250 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.250 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.250 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.250 { 00:18:16.250 "cntlid": 87, 
00:18:16.250 "qid": 0, 00:18:16.250 "state": "enabled", 00:18:16.250 "thread": "nvmf_tgt_poll_group_000", 00:18:16.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:16.250 "listen_address": { 00:18:16.250 "trtype": "TCP", 00:18:16.250 "adrfam": "IPv4", 00:18:16.250 "traddr": "10.0.0.2", 00:18:16.250 "trsvcid": "4420" 00:18:16.250 }, 00:18:16.250 "peer_address": { 00:18:16.250 "trtype": "TCP", 00:18:16.250 "adrfam": "IPv4", 00:18:16.250 "traddr": "10.0.0.1", 00:18:16.250 "trsvcid": "33910" 00:18:16.250 }, 00:18:16.250 "auth": { 00:18:16.250 "state": "completed", 00:18:16.250 "digest": "sha384", 00:18:16.250 "dhgroup": "ffdhe6144" 00:18:16.250 } 00:18:16.250 } 00:18:16.250 ]' 00:18:16.250 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.250 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.250 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.510 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:16.510 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.510 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.510 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.510 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.771 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:18:16.771 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:18:17.341 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.341 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:17.341 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.341 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.341 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.341 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:17.341 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.341 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:17.341 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:17.601 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:17.601 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.601 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:17.601 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:17.601 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:17.601 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.601 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.601 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.601 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.601 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.601 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.601 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.601 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.170 00:18:18.170 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.170 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.170 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.432 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.432 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.432 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.432 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.432 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.432 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.432 { 00:18:18.432 "cntlid": 89, 00:18:18.432 "qid": 0, 00:18:18.432 "state": "enabled", 00:18:18.432 "thread": "nvmf_tgt_poll_group_000", 00:18:18.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:18.432 "listen_address": { 00:18:18.432 "trtype": "TCP", 00:18:18.432 "adrfam": "IPv4", 00:18:18.432 "traddr": "10.0.0.2", 00:18:18.432 "trsvcid": "4420" 00:18:18.432 }, 00:18:18.432 "peer_address": { 00:18:18.432 "trtype": "TCP", 00:18:18.432 "adrfam": "IPv4", 00:18:18.432 "traddr": "10.0.0.1", 00:18:18.432 "trsvcid": "33936" 00:18:18.432 }, 00:18:18.432 "auth": { 00:18:18.432 "state": "completed", 00:18:18.432 "digest": "sha384", 00:18:18.432 "dhgroup": "ffdhe8192" 00:18:18.432 } 00:18:18.432 } 00:18:18.432 ]' 00:18:18.432 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.432 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.432 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.432 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.432 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.432 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.432 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.432 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.692 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:18:18.692 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:18:19.631 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.631 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:19.631 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.631 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.631 16:43:10 
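
[annotation] The DHHC-1 secrets replayed in these rounds were generated before this excerpt begins. For reference, nvme-cli ships a generator that emits strings of the same shape; flag names below are per recent nvme-cli and may differ by version:

    # 48-byte secret, SHA-384 transform, bound to the host NQN:
    nvme gen-dhchap-key --hmac=2 --key-length=48 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
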
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.631 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.631 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:19.631 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:19.631 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:19.631 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.631 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:19.631 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:19.631 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:19.631 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.631 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.631 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.631 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.631 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.631 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.631 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.631 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.199 00:18:20.199 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.199 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.199 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.457 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.457 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:20.457 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.457 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.457 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.457 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.457 { 00:18:20.457 "cntlid": 91, 00:18:20.457 "qid": 0, 00:18:20.457 "state": "enabled", 00:18:20.457 "thread": "nvmf_tgt_poll_group_000", 00:18:20.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:20.457 "listen_address": { 00:18:20.457 "trtype": "TCP", 00:18:20.457 "adrfam": "IPv4", 00:18:20.457 "traddr": "10.0.0.2", 00:18:20.457 "trsvcid": "4420" 00:18:20.457 }, 00:18:20.457 "peer_address": { 00:18:20.457 "trtype": "TCP", 00:18:20.457 "adrfam": "IPv4", 00:18:20.457 "traddr": "10.0.0.1", 00:18:20.457 "trsvcid": "32974" 00:18:20.457 }, 00:18:20.457 "auth": { 00:18:20.457 "state": "completed", 00:18:20.457 "digest": "sha384", 00:18:20.457 "dhgroup": "ffdhe8192" 00:18:20.457 } 00:18:20.457 } 00:18:20.457 ]' 00:18:20.457 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.457 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.457 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.457 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.457 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.457 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.457 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.457 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.716 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:18:20.716 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:21.653 16:43:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.653 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.227 00:18:22.227 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.227 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.227 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.486 16:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.486 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.487 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.487 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.487 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.487 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.487 { 00:18:22.487 "cntlid": 93, 00:18:22.487 "qid": 0, 00:18:22.487 "state": "enabled", 00:18:22.487 "thread": "nvmf_tgt_poll_group_000", 00:18:22.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:22.487 "listen_address": { 00:18:22.487 "trtype": "TCP", 00:18:22.487 "adrfam": "IPv4", 00:18:22.487 "traddr": "10.0.0.2", 00:18:22.487 "trsvcid": "4420" 00:18:22.487 }, 00:18:22.487 "peer_address": { 00:18:22.487 "trtype": "TCP", 00:18:22.487 "adrfam": "IPv4", 00:18:22.487 "traddr": "10.0.0.1", 00:18:22.487 "trsvcid": "32992" 00:18:22.487 }, 00:18:22.487 "auth": { 00:18:22.487 "state": "completed", 00:18:22.487 "digest": "sha384", 00:18:22.487 "dhgroup": "ffdhe8192" 00:18:22.487 } 00:18:22.487 } 00:18:22.487 ]' 00:18:22.487 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.487 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.487 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.487 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.487 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.746 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.746 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.746 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.746 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:18:22.746 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:18:23.682 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.682 16:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:23.682 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.682 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.682 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.682 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.682 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:23.682 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:23.942 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:23.942 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.942 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:23.942 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:23.942 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:23.942 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.942 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:23.942 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.942 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.942 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.942 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:23.942 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.942 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.511 00:18:24.511 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.511 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.511 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.511 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.511 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.511 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.511 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.511 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.511 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.511 { 00:18:24.511 "cntlid": 95, 00:18:24.511 "qid": 0, 00:18:24.511 "state": "enabled", 00:18:24.511 "thread": "nvmf_tgt_poll_group_000", 00:18:24.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:24.511 "listen_address": { 00:18:24.511 "trtype": "TCP", 00:18:24.511 "adrfam": "IPv4", 00:18:24.511 "traddr": "10.0.0.2", 00:18:24.511 "trsvcid": "4420" 00:18:24.511 }, 00:18:24.511 "peer_address": { 00:18:24.511 "trtype": "TCP", 00:18:24.511 "adrfam": "IPv4", 00:18:24.511 "traddr": "10.0.0.1", 00:18:24.511 "trsvcid": "33016" 00:18:24.511 }, 00:18:24.511 "auth": { 00:18:24.511 "state": "completed", 00:18:24.511 "digest": "sha384", 00:18:24.511 "dhgroup": "ffdhe8192" 00:18:24.511 } 00:18:24.511 } 00:18:24.511 ]' 00:18:24.511 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.770 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.770 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.770 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.770 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.770 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.770 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.770 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.029 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:18:25.029 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:18:25.600 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.600 16:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:25.600 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.600 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.600 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.600 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:25.600 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.600 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.600 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:25.600 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:25.860 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:25.860 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.860 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:25.860 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:25.860 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:25.860 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.860 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.860 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.860 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.860 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.860 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.860 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.860 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.120 00:18:26.120 
16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.120 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.120 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.380 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.380 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.380 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.380 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.380 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.380 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.380 { 00:18:26.380 "cntlid": 97, 00:18:26.380 "qid": 0, 00:18:26.380 "state": "enabled", 00:18:26.380 "thread": "nvmf_tgt_poll_group_000", 00:18:26.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:26.380 "listen_address": { 00:18:26.380 "trtype": "TCP", 00:18:26.380 "adrfam": "IPv4", 00:18:26.380 "traddr": "10.0.0.2", 00:18:26.380 "trsvcid": "4420" 00:18:26.380 }, 00:18:26.380 "peer_address": { 00:18:26.380 "trtype": "TCP", 00:18:26.380 "adrfam": "IPv4", 00:18:26.380 "traddr": "10.0.0.1", 00:18:26.380 "trsvcid": "33034" 00:18:26.380 }, 00:18:26.380 "auth": { 00:18:26.380 "state": "completed", 00:18:26.380 "digest": "sha512", 00:18:26.380 "dhgroup": "null" 00:18:26.380 } 00:18:26.380 } 00:18:26.380 ]' 00:18:26.380 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.380 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.380 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.640 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:26.640 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.640 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.640 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.640 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.900 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:18:26.900 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 
80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:18:27.469 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.730 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.017 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.017 00:18:28.304 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.304 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.304 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.304 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.304 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.304 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.304 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.304 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.304 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.304 { 00:18:28.304 "cntlid": 99, 00:18:28.304 "qid": 0, 00:18:28.304 "state": "enabled", 00:18:28.304 "thread": "nvmf_tgt_poll_group_000", 00:18:28.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:28.304 "listen_address": { 00:18:28.304 "trtype": "TCP", 00:18:28.304 "adrfam": "IPv4", 00:18:28.304 "traddr": "10.0.0.2", 00:18:28.304 "trsvcid": "4420" 00:18:28.304 }, 00:18:28.304 "peer_address": { 00:18:28.304 "trtype": "TCP", 00:18:28.304 "adrfam": "IPv4", 00:18:28.304 "traddr": "10.0.0.1", 00:18:28.304 "trsvcid": "33064" 00:18:28.304 }, 00:18:28.304 "auth": { 00:18:28.304 "state": "completed", 00:18:28.304 "digest": "sha512", 00:18:28.304 "dhgroup": "null" 00:18:28.304 } 00:18:28.304 } 00:18:28.304 ]' 00:18:28.304 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.304 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.304 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.576 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:28.576 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.576 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.576 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.576 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.866 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:18:28.866 16:43:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:18:29.438 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.438 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:29.438 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.438 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.438 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.438 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.438 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:29.438 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:29.699 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:29.699 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.699 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:29.699 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:29.699 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:29.699 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.699 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.699 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.699 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.699 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.699 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.699 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
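The block above completes one pass of the digest/dhgroup/key sweep for sha512 with the null DH group: auth.sh narrows the host to a single digest/dhgroup combination, re-authorizes the host NQN on the subsystem with the key pair under test, and re-attaches the controller so the DH-HMAC-CHAP handshake runs with the new parameters. Stripped of the xtrace noise, one iteration reduces to the sketch below (RPC socket, addresses, and NQNs are taken from this run; key2/ckey2 name keys registered earlier in the test, and rpc.py is the SPDK script invoked throughout this log):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a

    # host side: restrict the initiator to one digest/dhgroup combination
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null

    # target side: grant the host NQN access with the key pair under test
    $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # host side: attach, forcing a bidirectional DH-HMAC-CHAP handshake
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2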
00:18:29.699 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.959 00:18:29.959 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.959 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.959 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.220 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.220 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.220 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.220 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.220 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.220 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.220 { 00:18:30.220 "cntlid": 101, 00:18:30.220 "qid": 0, 00:18:30.220 "state": "enabled", 00:18:30.220 "thread": "nvmf_tgt_poll_group_000", 00:18:30.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:30.220 "listen_address": { 00:18:30.220 "trtype": "TCP", 00:18:30.220 "adrfam": "IPv4", 00:18:30.220 "traddr": "10.0.0.2", 00:18:30.220 "trsvcid": "4420" 00:18:30.220 }, 00:18:30.220 "peer_address": { 00:18:30.220 "trtype": "TCP", 00:18:30.220 "adrfam": "IPv4", 00:18:30.220 "traddr": "10.0.0.1", 00:18:30.220 "trsvcid": "45034" 00:18:30.220 }, 00:18:30.220 "auth": { 00:18:30.220 "state": "completed", 00:18:30.220 "digest": "sha512", 00:18:30.220 "dhgroup": "null" 00:18:30.220 } 00:18:30.220 } 00:18:30.220 ]' 00:18:30.220 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.220 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.220 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.220 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:30.220 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.220 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.220 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.220 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.482 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:18:30.482 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:18:31.426 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.426 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:31.426 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.426 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.426 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.426 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.426 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:31.426 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:31.426 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:31.426 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.426 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:31.426 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:31.426 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:31.426 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.426 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:31.426 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.426 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.426 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.426 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:31.426 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.426 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.685 00:18:31.685 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.685 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.685 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.945 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.945 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.945 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.945 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.945 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.945 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.945 { 00:18:31.945 "cntlid": 103, 00:18:31.945 "qid": 0, 00:18:31.945 "state": "enabled", 00:18:31.945 "thread": "nvmf_tgt_poll_group_000", 00:18:31.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:31.945 "listen_address": { 00:18:31.945 "trtype": "TCP", 00:18:31.945 "adrfam": "IPv4", 00:18:31.945 "traddr": "10.0.0.2", 00:18:31.945 "trsvcid": "4420" 00:18:31.945 }, 00:18:31.945 "peer_address": { 00:18:31.945 "trtype": "TCP", 00:18:31.945 "adrfam": "IPv4", 00:18:31.945 "traddr": "10.0.0.1", 00:18:31.945 "trsvcid": "45066" 00:18:31.945 }, 00:18:31.945 "auth": { 00:18:31.945 "state": "completed", 00:18:31.945 "digest": "sha512", 00:18:31.945 "dhgroup": "null" 00:18:31.945 } 00:18:31.945 } 00:18:31.945 ]' 00:18:31.945 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.945 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.945 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.945 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:31.945 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.204 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.204 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.204 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.204 16:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:18:32.204 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
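With the DH group switched from null to ffdhe2048, the attach that follows is verified against the target's live queue pairs, as in the JSON blobs above: the negotiated digest, DH group, and authentication state are read back and compared against what was configured. The three separate jq probes the script runs can be condensed into one filter (a shorthand, not the script's literal invocation):

    $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .digest, .dhgroup, .state'
    # expected for this block:
    #   sha512
    #   ffdhe2048
    #   completed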
00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.142 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.402 00:18:33.402 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.402 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.402 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.681 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.681 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.681 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.681 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.681 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.681 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.681 { 00:18:33.681 "cntlid": 105, 00:18:33.681 "qid": 0, 00:18:33.681 "state": "enabled", 00:18:33.681 "thread": "nvmf_tgt_poll_group_000", 00:18:33.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:33.681 "listen_address": { 00:18:33.681 "trtype": "TCP", 00:18:33.681 "adrfam": "IPv4", 00:18:33.681 "traddr": "10.0.0.2", 00:18:33.681 "trsvcid": "4420" 00:18:33.681 }, 00:18:33.681 "peer_address": { 00:18:33.681 "trtype": "TCP", 00:18:33.681 "adrfam": "IPv4", 00:18:33.681 "traddr": "10.0.0.1", 00:18:33.681 "trsvcid": "45102" 00:18:33.681 }, 00:18:33.681 "auth": { 00:18:33.681 "state": "completed", 00:18:33.681 "digest": "sha512", 00:18:33.681 "dhgroup": "ffdhe2048" 00:18:33.681 } 00:18:33.681 } 00:18:33.681 ]' 00:18:33.681 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.681 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.681 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.681 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:33.681 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.681 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.681 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.681 16:43:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.941 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:18:33.941 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:18:34.511 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.511 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:34.511 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.511 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.511 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.511 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.511 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:34.511 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:34.770 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:34.770 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.770 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.771 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:34.771 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:34.771 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.771 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.771 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.771 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:34.771 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.771 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.771 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.771 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.031 00:18:35.031 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.031 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.032 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.292 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.292 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.292 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.292 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.292 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.292 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.292 { 00:18:35.292 "cntlid": 107, 00:18:35.292 "qid": 0, 00:18:35.292 "state": "enabled", 00:18:35.292 "thread": "nvmf_tgt_poll_group_000", 00:18:35.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:35.292 "listen_address": { 00:18:35.292 "trtype": "TCP", 00:18:35.292 "adrfam": "IPv4", 00:18:35.292 "traddr": "10.0.0.2", 00:18:35.292 "trsvcid": "4420" 00:18:35.292 }, 00:18:35.292 "peer_address": { 00:18:35.292 "trtype": "TCP", 00:18:35.292 "adrfam": "IPv4", 00:18:35.292 "traddr": "10.0.0.1", 00:18:35.292 "trsvcid": "45130" 00:18:35.292 }, 00:18:35.292 "auth": { 00:18:35.292 "state": "completed", 00:18:35.292 "digest": "sha512", 00:18:35.292 "dhgroup": "ffdhe2048" 00:18:35.292 } 00:18:35.292 } 00:18:35.292 ]' 00:18:35.292 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.292 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.292 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.292 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:35.292 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:35.292 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.292 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.292 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.551 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:18:35.551 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:18:36.490 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.490 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:36.490 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.490 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.490 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.490 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.490 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:36.491 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:36.491 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:36.491 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.491 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:36.491 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:36.491 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:36.491 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.491 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:36.491 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.491 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.491 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.491 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.491 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.491 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.750 00:18:36.750 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.750 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.750 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.010 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.010 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.010 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.010 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.010 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.010 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.010 { 00:18:37.010 "cntlid": 109, 00:18:37.010 "qid": 0, 00:18:37.010 "state": "enabled", 00:18:37.010 "thread": "nvmf_tgt_poll_group_000", 00:18:37.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:37.010 "listen_address": { 00:18:37.010 "trtype": "TCP", 00:18:37.010 "adrfam": "IPv4", 00:18:37.010 "traddr": "10.0.0.2", 00:18:37.010 "trsvcid": "4420" 00:18:37.010 }, 00:18:37.010 "peer_address": { 00:18:37.010 "trtype": "TCP", 00:18:37.010 "adrfam": "IPv4", 00:18:37.010 "traddr": "10.0.0.1", 00:18:37.010 "trsvcid": "45154" 00:18:37.010 }, 00:18:37.010 "auth": { 00:18:37.010 "state": "completed", 00:18:37.010 "digest": "sha512", 00:18:37.010 "dhgroup": "ffdhe2048" 00:18:37.010 } 00:18:37.010 } 00:18:37.010 ]' 00:18:37.010 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.010 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.010 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.010 16:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:37.010 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.010 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.010 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.010 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.269 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:18:37.269 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.209 16:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:38.209 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:38.471 00:18:38.471 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.471 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.471 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.731 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.731 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.731 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.731 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.731 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.731 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.731 { 00:18:38.731 "cntlid": 111, 00:18:38.731 "qid": 0, 00:18:38.731 "state": "enabled", 00:18:38.731 "thread": "nvmf_tgt_poll_group_000", 00:18:38.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:38.731 "listen_address": { 00:18:38.731 "trtype": "TCP", 00:18:38.731 "adrfam": "IPv4", 00:18:38.731 "traddr": "10.0.0.2", 00:18:38.731 "trsvcid": "4420" 00:18:38.731 }, 00:18:38.731 "peer_address": { 00:18:38.731 "trtype": "TCP", 00:18:38.731 "adrfam": "IPv4", 00:18:38.731 "traddr": "10.0.0.1", 00:18:38.731 "trsvcid": "56574" 00:18:38.731 }, 00:18:38.731 "auth": { 00:18:38.731 "state": "completed", 00:18:38.731 "digest": "sha512", 00:18:38.731 "dhgroup": "ffdhe2048" 00:18:38.731 } 00:18:38.731 } 00:18:38.731 ]' 00:18:38.731 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.731 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.731 
16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.731 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:38.731 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.731 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.731 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.731 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.991 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:18:38.991 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.932 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.192 00:18:40.192 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.192 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.192 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.452 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.452 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.452 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.453 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.453 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.453 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.453 { 00:18:40.453 "cntlid": 113, 00:18:40.453 "qid": 0, 00:18:40.453 "state": "enabled", 00:18:40.453 "thread": "nvmf_tgt_poll_group_000", 00:18:40.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:40.453 "listen_address": { 00:18:40.453 "trtype": "TCP", 00:18:40.453 "adrfam": "IPv4", 00:18:40.453 "traddr": "10.0.0.2", 00:18:40.453 "trsvcid": "4420" 00:18:40.453 }, 00:18:40.453 "peer_address": { 00:18:40.453 "trtype": "TCP", 00:18:40.453 "adrfam": "IPv4", 00:18:40.453 "traddr": "10.0.0.1", 00:18:40.453 "trsvcid": "56600" 00:18:40.453 }, 00:18:40.453 "auth": { 00:18:40.453 "state": "completed", 00:18:40.453 "digest": "sha512", 00:18:40.453 "dhgroup": "ffdhe3072" 00:18:40.453 } 00:18:40.453 } 00:18:40.453 ]' 00:18:40.453 16:43:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.453 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.453 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.714 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:40.714 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.714 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.714 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.714 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.974 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:18:40.974 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:18:41.545 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.546 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:41.546 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.546 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.546 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.546 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.546 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:41.546 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:41.806 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:41.806 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.806 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:41.806 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:41.806 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:41.806 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.806 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.806 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.806 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.806 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.806 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.806 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.806 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.066 00:18:42.066 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.066 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.066 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.326 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.326 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.326 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.326 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.326 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.326 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.326 { 00:18:42.326 "cntlid": 115, 00:18:42.326 "qid": 0, 00:18:42.326 "state": "enabled", 00:18:42.326 "thread": "nvmf_tgt_poll_group_000", 00:18:42.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:42.326 "listen_address": { 00:18:42.326 "trtype": "TCP", 00:18:42.326 "adrfam": "IPv4", 00:18:42.326 "traddr": "10.0.0.2", 00:18:42.326 "trsvcid": "4420" 00:18:42.326 }, 00:18:42.326 "peer_address": { 00:18:42.326 "trtype": "TCP", 00:18:42.326 "adrfam": "IPv4", 
00:18:42.326 "traddr": "10.0.0.1", 00:18:42.326 "trsvcid": "56626" 00:18:42.326 }, 00:18:42.326 "auth": { 00:18:42.326 "state": "completed", 00:18:42.326 "digest": "sha512", 00:18:42.326 "dhgroup": "ffdhe3072" 00:18:42.326 } 00:18:42.326 } 00:18:42.326 ]' 00:18:42.326 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.326 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.326 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.326 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:42.326 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.326 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.326 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.326 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.586 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:18:42.586 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:18:43.526 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.526 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:43.526 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.526 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.526 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.526 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.526 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:43.526 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:43.526 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:18:43.526 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.526 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:43.526 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:43.526 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:43.526 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.526 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.526 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.526 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.526 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.526 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.526 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.526 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.786 00:18:43.786 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.786 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.786 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.047 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.047 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.047 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.047 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.047 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.047 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.047 { 00:18:44.047 "cntlid": 117, 00:18:44.047 "qid": 0, 00:18:44.047 "state": "enabled", 00:18:44.047 "thread": "nvmf_tgt_poll_group_000", 00:18:44.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:44.047 "listen_address": { 00:18:44.047 "trtype": "TCP", 
00:18:44.047 "adrfam": "IPv4", 00:18:44.047 "traddr": "10.0.0.2", 00:18:44.047 "trsvcid": "4420" 00:18:44.047 }, 00:18:44.047 "peer_address": { 00:18:44.047 "trtype": "TCP", 00:18:44.047 "adrfam": "IPv4", 00:18:44.047 "traddr": "10.0.0.1", 00:18:44.047 "trsvcid": "56660" 00:18:44.047 }, 00:18:44.047 "auth": { 00:18:44.047 "state": "completed", 00:18:44.047 "digest": "sha512", 00:18:44.047 "dhgroup": "ffdhe3072" 00:18:44.047 } 00:18:44.047 } 00:18:44.047 ]' 00:18:44.047 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.047 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.047 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.047 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:44.047 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.047 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.047 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.047 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.307 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:18:44.307 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:18:44.877 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.877 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:44.877 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.877 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.877 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.877 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.877 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:44.877 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:45.138 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:45.138 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.138 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:45.138 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:45.138 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:45.138 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.138 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:45.138 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.138 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.138 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.138 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:45.138 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.138 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.399 00:18:45.399 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.399 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.399 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.659 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.659 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.659 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.659 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.659 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.659 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.659 { 00:18:45.659 "cntlid": 119, 00:18:45.659 "qid": 0, 00:18:45.659 "state": "enabled", 00:18:45.659 "thread": "nvmf_tgt_poll_group_000", 00:18:45.659 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:45.659 "listen_address": { 00:18:45.659 "trtype": "TCP", 00:18:45.659 "adrfam": "IPv4", 00:18:45.659 "traddr": "10.0.0.2", 00:18:45.659 "trsvcid": "4420" 00:18:45.659 }, 00:18:45.659 "peer_address": { 00:18:45.659 "trtype": "TCP", 00:18:45.659 "adrfam": "IPv4", 00:18:45.659 "traddr": "10.0.0.1", 00:18:45.659 "trsvcid": "56694" 00:18:45.659 }, 00:18:45.659 "auth": { 00:18:45.659 "state": "completed", 00:18:45.659 "digest": "sha512", 00:18:45.659 "dhgroup": "ffdhe3072" 00:18:45.659 } 00:18:45.659 } 00:18:45.659 ]' 00:18:45.659 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.659 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.659 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.659 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:45.659 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.659 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.659 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.659 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.919 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:18:45.919 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:18:46.489 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.489 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:46.489 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.489 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.750 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.750 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.750 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.750 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:46.750 16:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:46.750 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:46.750 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.750 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:46.750 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:46.750 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:46.750 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.750 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.750 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.750 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.750 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.750 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.750 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.751 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.010 00:18:47.271 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.271 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.271 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.271 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.271 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.271 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.271 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.271 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.271 16:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.271 { 00:18:47.271 "cntlid": 121, 00:18:47.271 "qid": 0, 00:18:47.271 "state": "enabled", 00:18:47.271 "thread": "nvmf_tgt_poll_group_000", 00:18:47.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:47.271 "listen_address": { 00:18:47.271 "trtype": "TCP", 00:18:47.271 "adrfam": "IPv4", 00:18:47.271 "traddr": "10.0.0.2", 00:18:47.271 "trsvcid": "4420" 00:18:47.271 }, 00:18:47.271 "peer_address": { 00:18:47.271 "trtype": "TCP", 00:18:47.271 "adrfam": "IPv4", 00:18:47.271 "traddr": "10.0.0.1", 00:18:47.271 "trsvcid": "56730" 00:18:47.271 }, 00:18:47.271 "auth": { 00:18:47.271 "state": "completed", 00:18:47.271 "digest": "sha512", 00:18:47.271 "dhgroup": "ffdhe4096" 00:18:47.271 } 00:18:47.271 } 00:18:47.271 ]' 00:18:47.271 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.271 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.271 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.271 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:47.271 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.535 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.535 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.535 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.535 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:18:47.535 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:18:48.479 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.479 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:48.479 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.479 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.479 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:48.479 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.479 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.479 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.739 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:48.739 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.739 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:48.739 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:48.739 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:48.739 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.739 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.739 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.739 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.739 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.739 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.739 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.739 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.999 00:18:48.999 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.999 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.999 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.259 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.259 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.259 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.259 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.259 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.259 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.259 { 00:18:49.259 "cntlid": 123, 00:18:49.259 "qid": 0, 00:18:49.259 "state": "enabled", 00:18:49.259 "thread": "nvmf_tgt_poll_group_000", 00:18:49.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:49.259 "listen_address": { 00:18:49.259 "trtype": "TCP", 00:18:49.259 "adrfam": "IPv4", 00:18:49.259 "traddr": "10.0.0.2", 00:18:49.259 "trsvcid": "4420" 00:18:49.259 }, 00:18:49.259 "peer_address": { 00:18:49.259 "trtype": "TCP", 00:18:49.259 "adrfam": "IPv4", 00:18:49.259 "traddr": "10.0.0.1", 00:18:49.259 "trsvcid": "45580" 00:18:49.259 }, 00:18:49.259 "auth": { 00:18:49.259 "state": "completed", 00:18:49.259 "digest": "sha512", 00:18:49.259 "dhgroup": "ffdhe4096" 00:18:49.259 } 00:18:49.259 } 00:18:49.259 ]' 00:18:49.259 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.259 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.259 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.259 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.259 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.259 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.259 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.259 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.520 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:18:49.520 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:18:50.091 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.091 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:50.091 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.091 16:43:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.091 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.091 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.091 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:50.092 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:50.352 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:50.352 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.352 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:50.352 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:50.352 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:50.352 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.352 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.352 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.352 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.352 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.352 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.352 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.352 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.613 00:18:50.613 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.613 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.613 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.873 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.873 16:43:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.873 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.873 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.873 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.873 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.873 { 00:18:50.873 "cntlid": 125, 00:18:50.873 "qid": 0, 00:18:50.873 "state": "enabled", 00:18:50.873 "thread": "nvmf_tgt_poll_group_000", 00:18:50.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:50.873 "listen_address": { 00:18:50.873 "trtype": "TCP", 00:18:50.873 "adrfam": "IPv4", 00:18:50.873 "traddr": "10.0.0.2", 00:18:50.873 "trsvcid": "4420" 00:18:50.873 }, 00:18:50.873 "peer_address": { 00:18:50.873 "trtype": "TCP", 00:18:50.873 "adrfam": "IPv4", 00:18:50.873 "traddr": "10.0.0.1", 00:18:50.873 "trsvcid": "45608" 00:18:50.873 }, 00:18:50.873 "auth": { 00:18:50.873 "state": "completed", 00:18:50.873 "digest": "sha512", 00:18:50.873 "dhgroup": "ffdhe4096" 00:18:50.873 } 00:18:50.873 } 00:18:50.873 ]' 00:18:50.873 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.873 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.873 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.873 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:50.873 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.133 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.134 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.134 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.134 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:18:51.134 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:52.072 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:52.641 00:18:52.641 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.641 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.641 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.641 16:43:44 
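The key3 passes, here and later in the trace, carry no --dhchap-ctrlr-key, and the reason is visible in the frame ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}): inside connect_authenticate, $3 is the key index, and the ${var:+...} expansion produces nothing when ckeys[3] is empty, so the flag pair vanishes and only the host side is authenticated. A standalone illustration of the mechanism (array contents assumed; auth.sh generates real keys):

  ckeys=(c0 c1 c2 "")    # assumed: no controller key behind index 3
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${#ckey[@]}"     # 0 for keyid=3 (unidirectional), 2 for keyid 0-2 (bidirectional)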
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.641 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.641 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.641 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.641 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.641 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.641 { 00:18:52.641 "cntlid": 127, 00:18:52.641 "qid": 0, 00:18:52.641 "state": "enabled", 00:18:52.641 "thread": "nvmf_tgt_poll_group_000", 00:18:52.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:52.641 "listen_address": { 00:18:52.641 "trtype": "TCP", 00:18:52.641 "adrfam": "IPv4", 00:18:52.641 "traddr": "10.0.0.2", 00:18:52.641 "trsvcid": "4420" 00:18:52.641 }, 00:18:52.641 "peer_address": { 00:18:52.641 "trtype": "TCP", 00:18:52.641 "adrfam": "IPv4", 00:18:52.641 "traddr": "10.0.0.1", 00:18:52.641 "trsvcid": "45632" 00:18:52.641 }, 00:18:52.641 "auth": { 00:18:52.641 "state": "completed", 00:18:52.641 "digest": "sha512", 00:18:52.641 "dhgroup": "ffdhe4096" 00:18:52.641 } 00:18:52.641 } 00:18:52.641 ]' 00:18:52.641 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.901 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.901 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.901 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:52.901 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.901 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.901 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.901 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.160 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:18:53.161 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:18:53.731 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.731 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:53.731 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.731 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.731 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.731 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.731 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.731 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:53.731 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:53.991 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:53.991 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.991 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:53.991 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:53.991 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:53.991 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.991 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.991 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.991 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.991 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.991 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.991 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.992 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.562 00:18:54.562 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.562 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.562 
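The pass in progress continues below with nvmf_subsystem_get_qpairs against the target, and each of the JSON arrays in this trace is checked the same way: the jq filters pin the negotiated digest, DH group, and authentication state. As a condensed sketch for this ffdhe6144 pass (using $target_rpc and $subnqn from the earlier sketch):

  qpairs=$($target_rpc nvmf_subsystem_get_qpairs $subnqn)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]      # digest actually negotiated
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]   # DH group actually negotiated
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # DH-HMAC-CHAP finished successfully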
16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.562 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.562 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.562 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.562 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.562 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.562 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.562 { 00:18:54.562 "cntlid": 129, 00:18:54.562 "qid": 0, 00:18:54.562 "state": "enabled", 00:18:54.562 "thread": "nvmf_tgt_poll_group_000", 00:18:54.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:54.562 "listen_address": { 00:18:54.562 "trtype": "TCP", 00:18:54.562 "adrfam": "IPv4", 00:18:54.562 "traddr": "10.0.0.2", 00:18:54.562 "trsvcid": "4420" 00:18:54.562 }, 00:18:54.562 "peer_address": { 00:18:54.562 "trtype": "TCP", 00:18:54.562 "adrfam": "IPv4", 00:18:54.562 "traddr": "10.0.0.1", 00:18:54.562 "trsvcid": "45656" 00:18:54.562 }, 00:18:54.562 "auth": { 00:18:54.562 "state": "completed", 00:18:54.562 "digest": "sha512", 00:18:54.562 "dhgroup": "ffdhe6144" 00:18:54.562 } 00:18:54.562 } 00:18:54.562 ]' 00:18:54.822 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.822 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.822 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.822 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:54.822 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.822 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.822 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.822 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.082 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:18:55.082 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret 
DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:18:55.651 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.911 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.479 00:18:56.479 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.480 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.480 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.740 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.740 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.740 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.740 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.740 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.740 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.740 { 00:18:56.740 "cntlid": 131, 00:18:56.740 "qid": 0, 00:18:56.740 "state": "enabled", 00:18:56.740 "thread": "nvmf_tgt_poll_group_000", 00:18:56.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:56.740 "listen_address": { 00:18:56.740 "trtype": "TCP", 00:18:56.740 "adrfam": "IPv4", 00:18:56.740 "traddr": "10.0.0.2", 00:18:56.740 "trsvcid": "4420" 00:18:56.740 }, 00:18:56.740 "peer_address": { 00:18:56.740 "trtype": "TCP", 00:18:56.740 "adrfam": "IPv4", 00:18:56.740 "traddr": "10.0.0.1", 00:18:56.740 "trsvcid": "45672" 00:18:56.740 }, 00:18:56.740 "auth": { 00:18:56.740 "state": "completed", 00:18:56.740 "digest": "sha512", 00:18:56.740 "dhgroup": "ffdhe6144" 00:18:56.740 } 00:18:56.740 } 00:18:56.740 ]' 00:18:56.740 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.740 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.740 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.740 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:56.740 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.740 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.740 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.740 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.999 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:18:56.999 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:18:57.938 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.938 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:57.938 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.938 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.938 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.938 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.938 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:57.938 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:57.938 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:57.938 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.938 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:57.938 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:57.938 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:57.938 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.938 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.938 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.938 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.939 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.939 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.939 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.939 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.508 00:18:58.508 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.508 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.508 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.508 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.508 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.508 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.508 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.508 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.508 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.508 { 00:18:58.508 "cntlid": 133, 00:18:58.508 "qid": 0, 00:18:58.508 "state": "enabled", 00:18:58.508 "thread": "nvmf_tgt_poll_group_000", 00:18:58.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:18:58.508 "listen_address": { 00:18:58.508 "trtype": "TCP", 00:18:58.508 "adrfam": "IPv4", 00:18:58.508 "traddr": "10.0.0.2", 00:18:58.508 "trsvcid": "4420" 00:18:58.508 }, 00:18:58.508 "peer_address": { 00:18:58.508 "trtype": "TCP", 00:18:58.508 "adrfam": "IPv4", 00:18:58.508 "traddr": "10.0.0.1", 00:18:58.508 "trsvcid": "35750" 00:18:58.508 }, 00:18:58.508 "auth": { 00:18:58.508 "state": "completed", 00:18:58.508 "digest": "sha512", 00:18:58.508 "dhgroup": "ffdhe6144" 00:18:58.508 } 00:18:58.508 } 00:18:58.508 ]' 00:18:58.768 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.768 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.768 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.768 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:58.768 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.768 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.768 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.768 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.028 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret 
DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:18:59.028 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:18:59.598 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.598 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:59.598 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.598 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.598 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.598 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.598 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:59.598 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:59.858 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:59.858 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.858 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:59.858 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:59.858 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:59.858 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.858 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:18:59.858 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.858 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.858 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.858 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:59.858 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:59.858 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.427 00:19:00.427 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.427 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.427 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.688 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.688 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.688 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.688 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.688 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.688 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.688 { 00:19:00.688 "cntlid": 135, 00:19:00.688 "qid": 0, 00:19:00.688 "state": "enabled", 00:19:00.688 "thread": "nvmf_tgt_poll_group_000", 00:19:00.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:00.688 "listen_address": { 00:19:00.688 "trtype": "TCP", 00:19:00.688 "adrfam": "IPv4", 00:19:00.688 "traddr": "10.0.0.2", 00:19:00.688 "trsvcid": "4420" 00:19:00.688 }, 00:19:00.688 "peer_address": { 00:19:00.688 "trtype": "TCP", 00:19:00.688 "adrfam": "IPv4", 00:19:00.688 "traddr": "10.0.0.1", 00:19:00.688 "trsvcid": "35788" 00:19:00.688 }, 00:19:00.688 "auth": { 00:19:00.688 "state": "completed", 00:19:00.688 "digest": "sha512", 00:19:00.688 "dhgroup": "ffdhe6144" 00:19:00.688 } 00:19:00.688 } 00:19:00.688 ]' 00:19:00.688 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.688 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.688 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.688 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:00.688 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.688 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.688 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.688 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.949 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:19:00.949 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:19:01.519 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.519 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:01.519 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.519 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.519 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.519 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.519 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.519 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:01.519 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:01.780 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:01.780 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.780 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:01.780 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:01.780 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:01.780 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.780 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.780 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.780 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.780 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.780 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.780 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.780 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.349 00:19:02.349 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.349 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.349 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.609 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.609 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.609 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.609 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.609 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.609 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.609 { 00:19:02.609 "cntlid": 137, 00:19:02.610 "qid": 0, 00:19:02.610 "state": "enabled", 00:19:02.610 "thread": "nvmf_tgt_poll_group_000", 00:19:02.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:02.610 "listen_address": { 00:19:02.610 "trtype": "TCP", 00:19:02.610 "adrfam": "IPv4", 00:19:02.610 "traddr": "10.0.0.2", 00:19:02.610 "trsvcid": "4420" 00:19:02.610 }, 00:19:02.610 "peer_address": { 00:19:02.610 "trtype": "TCP", 00:19:02.610 "adrfam": "IPv4", 00:19:02.610 "traddr": "10.0.0.1", 00:19:02.610 "trsvcid": "35808" 00:19:02.610 }, 00:19:02.610 "auth": { 00:19:02.610 "state": "completed", 00:19:02.610 "digest": "sha512", 00:19:02.610 "dhgroup": "ffdhe8192" 00:19:02.610 } 00:19:02.610 } 00:19:02.610 ]' 00:19:02.610 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.610 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.610 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.610 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:02.610 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.870 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.870 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.870 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.870 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:19:02.870 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.810 16:43:55 
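Each pass also exercises the kernel initiator: the nvme connect/disconnect lines above hand the same keys over in the DHHC-1 secret representation, where the two digits after "DHHC-1:" indicate how the secret was transformed (00 = unhashed; 01/02/03 = SHA-256/-384/-512), which is why the key0 secret starts with DHHC-1:00: and the key3 secret with DHHC-1:03:. In outline, with $subnqn/$hostnqn from the first sketch (secrets elided; "..." is not a literal value, and the flag readings are per nvme-cli and worth checking against your version):

  # -i 1 asks for a single I/O queue; -l 0 sets ctrl-loss-tmo so a lost controller
  # is given up immediately rather than retried.
  nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn \
      --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 \
      --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
  nvme disconnect -n $subnqn    # trace shows: disconnected 1 controller(s)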
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.810 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.378 00:19:04.378 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.378 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.378 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.638 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.638 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.638 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.638 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.638 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.638 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.638 { 00:19:04.638 "cntlid": 139, 00:19:04.638 "qid": 0, 00:19:04.638 "state": "enabled", 00:19:04.638 "thread": "nvmf_tgt_poll_group_000", 00:19:04.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:04.638 "listen_address": { 00:19:04.638 "trtype": "TCP", 00:19:04.638 "adrfam": "IPv4", 00:19:04.638 "traddr": "10.0.0.2", 00:19:04.638 "trsvcid": "4420" 00:19:04.638 }, 00:19:04.638 "peer_address": { 00:19:04.638 "trtype": "TCP", 00:19:04.638 "adrfam": "IPv4", 00:19:04.638 "traddr": "10.0.0.1", 00:19:04.638 "trsvcid": "35844" 00:19:04.638 }, 00:19:04.638 "auth": { 00:19:04.638 "state": "completed", 00:19:04.638 "digest": "sha512", 00:19:04.638 "dhgroup": "ffdhe8192" 00:19:04.638 } 00:19:04.638 } 00:19:04.638 ]' 00:19:04.638 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.898 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.898 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.898 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:04.898 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.898 16:43:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.898 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.898 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.157 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:19:05.157 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: --dhchap-ctrl-secret DHHC-1:02:NWU5NzEwNzdmZjhhM2UxZTkzY2NmYjI4MmFhYWMzODY1NWY4YjVlMDc3YTAzNzg1F3h3ZA==: 00:19:05.724 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.724 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:05.724 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.724 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.724 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.725 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.725 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:05.725 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:05.983 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:05.983 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.983 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:05.983 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:05.983 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:05.983 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.983 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.983 16:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.983 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.983 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.983 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.983 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.983 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.551 00:19:06.551 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.551 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.551 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.810 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.810 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.810 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.810 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.810 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.810 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.810 { 00:19:06.810 "cntlid": 141, 00:19:06.810 "qid": 0, 00:19:06.810 "state": "enabled", 00:19:06.810 "thread": "nvmf_tgt_poll_group_000", 00:19:06.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:06.810 "listen_address": { 00:19:06.810 "trtype": "TCP", 00:19:06.810 "adrfam": "IPv4", 00:19:06.810 "traddr": "10.0.0.2", 00:19:06.810 "trsvcid": "4420" 00:19:06.810 }, 00:19:06.810 "peer_address": { 00:19:06.810 "trtype": "TCP", 00:19:06.810 "adrfam": "IPv4", 00:19:06.810 "traddr": "10.0.0.1", 00:19:06.810 "trsvcid": "35866" 00:19:06.810 }, 00:19:06.810 "auth": { 00:19:06.810 "state": "completed", 00:19:06.810 "digest": "sha512", 00:19:06.810 "dhgroup": "ffdhe8192" 00:19:06.810 } 00:19:06.810 } 00:19:06.810 ]' 00:19:06.810 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.810 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.810 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.810 16:43:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:06.810 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.810 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.810 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.810 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.070 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:19:07.070 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:01:ODRjZTIzNTVjMTA4NWNmOTQyODMxNTUyYWUxZjYwM2YeLdbp: 00:19:07.639 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.639 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:07.639 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.639 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.639 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.639 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.639 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:07.639 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:07.938 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:07.938 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.938 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:07.939 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:07.939 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:07.939 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.939 16:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:19:07.939 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.939 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.939 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.939 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:07.939 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:07.939 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.604 00:19:08.604 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.604 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.604 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.604 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.604 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.604 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.604 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.604 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.604 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.604 { 00:19:08.604 "cntlid": 143, 00:19:08.604 "qid": 0, 00:19:08.604 "state": "enabled", 00:19:08.604 "thread": "nvmf_tgt_poll_group_000", 00:19:08.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:08.604 "listen_address": { 00:19:08.604 "trtype": "TCP", 00:19:08.604 "adrfam": "IPv4", 00:19:08.604 "traddr": "10.0.0.2", 00:19:08.604 "trsvcid": "4420" 00:19:08.604 }, 00:19:08.604 "peer_address": { 00:19:08.604 "trtype": "TCP", 00:19:08.604 "adrfam": "IPv4", 00:19:08.604 "traddr": "10.0.0.1", 00:19:08.604 "trsvcid": "53826" 00:19:08.604 }, 00:19:08.604 "auth": { 00:19:08.604 "state": "completed", 00:19:08.604 "digest": "sha512", 00:19:08.604 "dhgroup": "ffdhe8192" 00:19:08.604 } 00:19:08.604 } 00:19:08.604 ]' 00:19:08.604 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.864 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.864 
16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.864 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:08.864 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.864 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.864 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.864 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.124 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:19:09.124 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:19:09.695 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.695 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:09.695 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.695 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.695 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.695 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:09.695 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:09.695 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:09.695 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:09.695 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:09.695 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:09.955 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:09.955 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.955 16:44:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:09.955 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:09.955 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:09.955 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.955 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.955 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.955 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.955 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.955 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.955 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.955 16:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.524 00:19:10.524 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.524 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.524 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.785 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.785 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.785 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.785 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.785 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.785 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.785 { 00:19:10.785 "cntlid": 145, 00:19:10.785 "qid": 0, 00:19:10.785 "state": "enabled", 00:19:10.785 "thread": "nvmf_tgt_poll_group_000", 00:19:10.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:10.785 "listen_address": { 00:19:10.785 "trtype": "TCP", 00:19:10.785 "adrfam": "IPv4", 00:19:10.785 "traddr": "10.0.0.2", 00:19:10.785 "trsvcid": "4420" 00:19:10.785 }, 00:19:10.785 "peer_address": { 00:19:10.785 
"trtype": "TCP", 00:19:10.785 "adrfam": "IPv4", 00:19:10.785 "traddr": "10.0.0.1", 00:19:10.785 "trsvcid": "53852" 00:19:10.785 }, 00:19:10.785 "auth": { 00:19:10.785 "state": "completed", 00:19:10.785 "digest": "sha512", 00:19:10.785 "dhgroup": "ffdhe8192" 00:19:10.785 } 00:19:10.785 } 00:19:10.785 ]' 00:19:10.785 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.785 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.785 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.785 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:10.785 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.045 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.045 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.045 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.045 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:19:11.045 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:00:YTZlZWIzOWE0NTg1MjlhODA5NTQwNmE5NGIyNDFhZmIxZGI5ZTJjNTViYzBiODI04+fbkA==: --dhchap-ctrl-secret DHHC-1:03:MDllZjVlZDE3NzBkY2FmYjc4ZTNlNjM2NmJkMTliZDFkZjBlNTBhYzVhYjYyOGY1NzBiNmFhZDg3NTViNGE2OAxT1D4=: 00:19:11.616 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.876 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:11.876 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.876 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.876 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.876 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 00:19:11.876 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.876 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.876 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.876 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:11.876 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:11.876 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:11.876 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:11.876 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.876 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:11.876 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.876 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:11.876 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:11.876 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:12.447 request: 00:19:12.447 { 00:19:12.447 "name": "nvme0", 00:19:12.447 "trtype": "tcp", 00:19:12.447 "traddr": "10.0.0.2", 00:19:12.447 "adrfam": "ipv4", 00:19:12.447 "trsvcid": "4420", 00:19:12.448 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:12.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:12.448 "prchk_reftag": false, 00:19:12.448 "prchk_guard": false, 00:19:12.448 "hdgst": false, 00:19:12.448 "ddgst": false, 00:19:12.448 "dhchap_key": "key2", 00:19:12.448 "allow_unrecognized_csi": false, 00:19:12.448 "method": "bdev_nvme_attach_controller", 00:19:12.448 "req_id": 1 00:19:12.448 } 00:19:12.448 Got JSON-RPC error response 00:19:12.448 response: 00:19:12.448 { 00:19:12.448 "code": -5, 00:19:12.448 "message": "Input/output error" 00:19:12.448 } 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.448 16:44:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:12.448 16:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:13.019 request: 00:19:13.019 { 00:19:13.019 "name": "nvme0", 00:19:13.019 "trtype": "tcp", 00:19:13.019 "traddr": "10.0.0.2", 00:19:13.019 "adrfam": "ipv4", 00:19:13.019 "trsvcid": "4420", 00:19:13.019 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:13.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:13.019 "prchk_reftag": false, 00:19:13.019 "prchk_guard": false, 00:19:13.019 "hdgst": false, 00:19:13.019 "ddgst": false, 00:19:13.019 "dhchap_key": "key1", 00:19:13.019 "dhchap_ctrlr_key": "ckey2", 00:19:13.019 "allow_unrecognized_csi": false, 00:19:13.019 "method": "bdev_nvme_attach_controller", 00:19:13.019 "req_id": 1 00:19:13.019 } 00:19:13.019 Got JSON-RPC error response 00:19:13.019 response: 00:19:13.019 { 00:19:13.019 "code": -5, 00:19:13.019 "message": "Input/output error" 00:19:13.019 } 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:13.019 16:44:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.019 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.278 request: 00:19:13.279 { 00:19:13.279 "name": "nvme0", 00:19:13.279 "trtype": "tcp", 00:19:13.279 "traddr": "10.0.0.2", 00:19:13.279 "adrfam": "ipv4", 00:19:13.279 "trsvcid": "4420", 00:19:13.279 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:13.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:13.279 "prchk_reftag": false, 00:19:13.279 "prchk_guard": false, 00:19:13.279 "hdgst": false, 00:19:13.279 "ddgst": false, 00:19:13.279 "dhchap_key": "key1", 00:19:13.279 "dhchap_ctrlr_key": "ckey1", 00:19:13.279 "allow_unrecognized_csi": false, 00:19:13.279 "method": "bdev_nvme_attach_controller", 00:19:13.279 "req_id": 1 00:19:13.279 } 00:19:13.279 Got JSON-RPC error response 00:19:13.279 response: 00:19:13.279 { 00:19:13.279 "code": -5, 00:19:13.279 "message": "Input/output error" 00:19:13.279 } 00:19:13.539 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:13.539 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:13.539 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:13.539 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:13.539 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:13.539 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.539 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.539 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.539 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2668618 00:19:13.539 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2668618 ']' 00:19:13.539 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2668618 00:19:13.539 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:13.539 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:13.539 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2668618 00:19:13.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:13.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:13.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2668618' 00:19:13.539 killing process with pid 2668618 00:19:13.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2668618 00:19:13.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2668618 00:19:13.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:13.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:13.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:13.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:13.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=2694276 00:19:13.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 2694276 00:19:13.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:13.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2694276 ']' 00:19:13.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:13.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:13.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.480 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:14.480 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:14.480 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:14.480 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:14.480 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.740 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.740 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:14.740 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2694276 00:19:14.740 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2694276 ']' 00:19:14.740 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.741 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:14.741 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
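Note: the entries above capture the target restart for the rekey phase: the long-running nvmf_tgt from the earlier rounds (pid 2668618) is killed off, and a fresh instance (pid 2694276) is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc and -L nvmf_auth so the DH-HMAC-CHAP negotiation is traced. A minimal sketch of that restart, using only the binary path, flags, and socket path visible in this log (polling readiness with rpc_get_methods is an assumption; the harness's waitforlisten helper may poll differently):

  # launch the target paused (--wait-for-rpc) with nvmf auth tracing, inside the test namespace
  sudo ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  # poll the UNIX-domain RPC socket until the target answers (assumed readiness check)
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
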
00:19:14.741 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:14.741 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.741 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:14.741 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:14.741 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:14.741 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.741 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.002 null0 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.iSo 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.eCC ]] 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eCC 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.73r 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.G1m ]] 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.G1m 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:15.002 16:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.4Sk 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Eu3 ]] 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Eu3 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.5AM 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
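Note: with the key files registered on the keyring (key0..key3 and their ckey counterparts above), each connect_authenticate round in this log reduces to the same two RPCs: nvmf_subsystem_add_host on the target side, then bdev_nvme_attach_controller on the host side with the matching --dhchap-key. A condensed sketch of the key3 round, built only from arguments that appear verbatim in the surrounding entries (the target-side call is shown against /var/tmp/spdk.sock, the socket this run's waitforlisten polled; rpc_cmd may resolve the socket differently):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
  # target side: admit the host on cnode0 with DH-HMAC-CHAP key3
  # (no --dhchap-ctrlr-key here, since ckey3 is unset in this round)
  $RPC -s /var/tmp/spdk.sock nvmf_subsystem_add_host \
    nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key3
  # host side: attach controller nvme0 over TCP/IPv4, driving the auth exchange with key3
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
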
00:19:15.002 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.945 nvme0n1 00:19:15.945 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.945 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.945 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.206 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.206 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.206 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.206 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.206 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.206 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.206 { 00:19:16.206 "cntlid": 1, 00:19:16.206 "qid": 0, 00:19:16.206 "state": "enabled", 00:19:16.206 "thread": "nvmf_tgt_poll_group_000", 00:19:16.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:16.206 "listen_address": { 00:19:16.206 "trtype": "TCP", 00:19:16.206 "adrfam": "IPv4", 00:19:16.206 "traddr": "10.0.0.2", 00:19:16.206 "trsvcid": "4420" 00:19:16.206 }, 00:19:16.206 "peer_address": { 00:19:16.206 "trtype": "TCP", 00:19:16.206 "adrfam": "IPv4", 00:19:16.206 "traddr": "10.0.0.1", 00:19:16.206 "trsvcid": "53912" 00:19:16.206 }, 00:19:16.206 "auth": { 00:19:16.206 "state": "completed", 00:19:16.206 "digest": "sha512", 00:19:16.206 "dhgroup": "ffdhe8192" 00:19:16.206 } 00:19:16.206 } 00:19:16.206 ]' 00:19:16.206 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.206 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.206 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.206 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:16.206 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.206 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.206 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.206 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.466 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:19:16.466 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:19:17.406 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.406 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:17.406 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.406 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.406 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.406 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key3 00:19:17.406 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.406 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.406 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.406 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:17.406 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:17.406 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:17.406 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:17.406 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:17.406 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:17.406 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:17.406 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:17.406 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:17.406 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:17.406 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:17.406 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:17.666 request: 00:19:17.666 { 00:19:17.666 "name": "nvme0", 00:19:17.666 "trtype": "tcp", 00:19:17.666 "traddr": "10.0.0.2", 00:19:17.666 "adrfam": "ipv4", 00:19:17.666 "trsvcid": "4420", 00:19:17.666 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:17.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:17.666 "prchk_reftag": false, 00:19:17.666 "prchk_guard": false, 00:19:17.666 "hdgst": false, 00:19:17.666 "ddgst": false, 00:19:17.666 "dhchap_key": "key3", 00:19:17.666 "allow_unrecognized_csi": false, 00:19:17.666 "method": "bdev_nvme_attach_controller", 00:19:17.666 "req_id": 1 00:19:17.666 } 00:19:17.666 Got JSON-RPC error response 00:19:17.666 response: 00:19:17.666 { 00:19:17.666 "code": -5, 00:19:17.666 "message": "Input/output error" 00:19:17.666 } 00:19:17.666 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:17.666 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:17.666 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:17.666 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:17.666 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:17.666 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:17.666 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:17.666 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:17.927 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:17.927 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:17.927 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:17.927 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:17.927 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:17.927 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:17.927 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:17.927 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:17.927 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:17.927 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:18.187 request: 00:19:18.187 { 00:19:18.187 "name": "nvme0", 00:19:18.187 "trtype": "tcp", 00:19:18.187 "traddr": "10.0.0.2", 00:19:18.187 "adrfam": "ipv4", 00:19:18.187 "trsvcid": "4420", 00:19:18.187 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:18.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:18.187 "prchk_reftag": false, 00:19:18.187 "prchk_guard": false, 00:19:18.187 "hdgst": false, 00:19:18.187 "ddgst": false, 00:19:18.187 "dhchap_key": "key3", 00:19:18.187 "allow_unrecognized_csi": false, 00:19:18.187 "method": "bdev_nvme_attach_controller", 00:19:18.187 "req_id": 1 00:19:18.187 } 00:19:18.187 Got JSON-RPC error response 00:19:18.187 response: 00:19:18.187 { 00:19:18.187 "code": -5, 00:19:18.187 "message": "Input/output error" 00:19:18.187 } 00:19:18.187 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:18.187 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:18.187 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:18.187 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:18.187 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:18.187 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:18.187 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:18.187 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:18.187 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:18.187 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:18.447 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:18.447 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.447 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.447 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.447 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:18.447 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.447 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.448 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.448 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:18.448 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:18.448 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:18.448 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:18.448 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:18.448 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:18.448 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:18.448 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:18.448 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:18.448 16:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:18.708 request: 00:19:18.708 { 00:19:18.708 "name": "nvme0", 00:19:18.708 "trtype": "tcp", 00:19:18.708 "traddr": "10.0.0.2", 00:19:18.708 "adrfam": "ipv4", 00:19:18.708 "trsvcid": "4420", 00:19:18.708 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:18.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:18.708 "prchk_reftag": false, 00:19:18.708 "prchk_guard": false, 00:19:18.708 "hdgst": false, 00:19:18.708 "ddgst": false, 00:19:18.708 "dhchap_key": "key0", 00:19:18.708 "dhchap_ctrlr_key": "key1", 00:19:18.708 "allow_unrecognized_csi": false, 00:19:18.708 "method": "bdev_nvme_attach_controller", 00:19:18.708 "req_id": 1 00:19:18.708 } 00:19:18.708 Got JSON-RPC error response 00:19:18.708 response: 00:19:18.708 { 00:19:18.708 "code": -5, 00:19:18.708 "message": "Input/output error" 00:19:18.708 } 00:19:18.708 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:18.708 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:18.708 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:18.708 16:44:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:18.708 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:18.708 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:18.708 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:18.968 nvme0n1 00:19:18.968 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:18.968 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:18.968 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.229 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.229 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.229 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.489 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 00:19:19.489 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.489 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.489 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.489 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:19.489 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:19.489 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:20.432 nvme0n1 00:19:20.432 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:20.432 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:20.432 16:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.432 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.432 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:20.433 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.433 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.693 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.693 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:20.693 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.693 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:20.693 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.693 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:19:20.693 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid 80f8a7aa-1216-ec11-9bc7-a4bf018b228a -l 0 --dhchap-secret DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: --dhchap-ctrl-secret DHHC-1:03:YWM5MGIxOTc2YzA5MzkzNDliZjJmYThiNDM2OGVkYjZiYTU2ODcxN2U0OGY3MjBlNGE1MDRhMmQ1Njk0ZjQzMRfkacU=: 00:19:21.263 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:21.263 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:21.263 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:21.263 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:21.263 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:21.263 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:21.263 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:21.263 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.264 16:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.525 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:19:21.525 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:21.525 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:21.525 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:21.525 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:21.525 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:21.525 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:21.525 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:21.525 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:21.525 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:22.096 request: 00:19:22.096 { 00:19:22.096 "name": "nvme0", 00:19:22.096 "trtype": "tcp", 00:19:22.096 "traddr": "10.0.0.2", 00:19:22.096 "adrfam": "ipv4", 00:19:22.096 "trsvcid": "4420", 00:19:22.096 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:22.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a", 00:19:22.096 "prchk_reftag": false, 00:19:22.096 "prchk_guard": false, 00:19:22.096 "hdgst": false, 00:19:22.096 "ddgst": false, 00:19:22.096 "dhchap_key": "key1", 00:19:22.096 "allow_unrecognized_csi": false, 00:19:22.096 "method": "bdev_nvme_attach_controller", 00:19:22.096 "req_id": 1 00:19:22.096 } 00:19:22.096 Got JSON-RPC error response 00:19:22.096 response: 00:19:22.096 { 00:19:22.096 "code": -5, 00:19:22.096 "message": "Input/output error" 00:19:22.096 } 00:19:22.096 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:22.096 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:22.096 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:22.096 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:22.096 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:22.096 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:22.096 16:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:23.036 nvme0n1 00:19:23.036 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:23.036 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:23.036 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.297 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.297 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.297 16:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.558 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:23.558 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.558 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.558 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.558 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:23.558 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:23.558 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:23.818 nvme0n1 00:19:23.818 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:23.818 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:23.818 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.078 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.079 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.079 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.338 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:24.338 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.338 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.338 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.338 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: '' 2s 00:19:24.338 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:24.339 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:24.339 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: 00:19:24.339 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:24.339 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:24.339 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:24.339 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: ]] 00:19:24.339 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OGU4NGM3ZTVmYTlhZWNmMTQ3ZTM4NmRkNjkzMjIzMTaFR2uS: 00:19:24.339 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:24.339 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:24.339 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: 2s 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: ]] 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZWM4ZjkyMmEyNDk2N2I3ZmJhMWM2Y2FhYmVjMDY2MmFmOTcwMmFjMjdkYzdiMjk0HGSUqA==: 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:26.251 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:28.162 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:28.162 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:19:28.162 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:19:28.162 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:19:28.422 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:19:28.422 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:19:28.422 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:19:28.422 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.422 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:28.422 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.422 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.422 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.422 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:28.422 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:28.422 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:29.363 nvme0n1 00:19:29.363 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:29.363 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.363 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.363 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.363 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:29.363 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:29.933 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:29.933 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:29.933 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.933 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.933 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:29.933 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.933 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.933 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.933 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:29.933 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:30.193 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:30.193 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:30.193 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.454 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.454 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:30.454 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.454 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.454 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.454 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:30.454 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:30.454 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:30.454 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:30.454 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.454 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:30.454 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.454 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:30.454 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:31.025 request: 00:19:31.025 { 00:19:31.025 "name": "nvme0", 00:19:31.025 "dhchap_key": "key1", 00:19:31.025 "dhchap_ctrlr_key": "key3", 00:19:31.025 "method": "bdev_nvme_set_keys", 00:19:31.025 "req_id": 1 00:19:31.025 } 00:19:31.025 Got JSON-RPC error response 00:19:31.025 response: 00:19:31.025 { 00:19:31.025 "code": -13, 00:19:31.025 "message": "Permission denied" 00:19:31.025 } 00:19:31.025 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:31.025 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:31.025 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:31.025 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:31.025 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:31.025 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:31.025 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.285 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:19:31.285 16:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:32.226 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:32.226 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:32.226 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.485 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:32.486 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:32.486 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.486 16:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.486 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.486 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:32.486 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:32.486 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:33.424 nvme0n1 00:19:33.424 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:33.424 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.424 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.424 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.424 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:33.425 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:33.425 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:33.425 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
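[Note on the key-rotation flow exercised above: the test swaps the target-side keys first with nvmf_subsystem_set_keys, then re-keys the live host controller with bdev_nvme_set_keys; a mismatched pair is rejected with JSON-RPC error -13 (Permission denied), as the request/response dump that follows shows. A minimal sketch of the two calls, assuming the DH-HMAC-CHAP keys key0..key3 were registered earlier in the run and that the host-side RPC server listens on /var/tmp/host.sock (the log shows rpc.py by its absolute workspace path):

    # target side: install the new key pair for this host on the subsystem
    scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # host side: re-authenticate the existing controller with the matching pair
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3

Supplying a pair on the host that does not match what the target holds (for example key2 with ctrlr key0, as tried below) is what produces the -13 responses in this log.]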
00:19:33.425 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.425 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:33.425 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.425 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:33.425 16:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:33.994 request: 00:19:33.994 { 00:19:33.994 "name": "nvme0", 00:19:33.994 "dhchap_key": "key2", 00:19:33.994 "dhchap_ctrlr_key": "key0", 00:19:33.994 "method": "bdev_nvme_set_keys", 00:19:33.994 "req_id": 1 00:19:33.994 } 00:19:33.994 Got JSON-RPC error response 00:19:33.994 response: 00:19:33.994 { 00:19:33.994 "code": -13, 00:19:33.994 "message": "Permission denied" 00:19:33.994 } 00:19:33.994 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:33.994 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:33.994 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:33.994 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:33.994 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:33.994 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:33.994 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.994 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:33.994 16:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:35.467 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:35.467 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:35.467 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.467 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:35.467 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:35.467 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:35.467 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2668812 00:19:35.467 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2668812 ']' 00:19:35.467 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2668812 00:19:35.467 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:35.467 
16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:35.467 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2668812 00:19:35.467 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:35.467 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:35.467 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2668812' 00:19:35.467 killing process with pid 2668812 00:19:35.467 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2668812 00:19:35.467 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2668812 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:35.727 rmmod nvme_tcp 00:19:35.727 rmmod nvme_fabrics 00:19:35.727 rmmod nvme_keyring 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 2694276 ']' 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 2694276 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2694276 ']' 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2694276 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2694276 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2694276' 00:19:35.727 killing process with pid 2694276 00:19:35.727 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2694276 00:19:35.727 16:44:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2694276 00:19:35.987 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:35.987 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:35.987 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:35.987 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:35.987 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:19:35.987 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:35.987 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:19:35.987 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:35.987 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:35.987 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.987 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:35.987 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.894 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:37.894 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.iSo /tmp/spdk.key-sha256.73r /tmp/spdk.key-sha384.4Sk /tmp/spdk.key-sha512.5AM /tmp/spdk.key-sha512.eCC /tmp/spdk.key-sha384.G1m /tmp/spdk.key-sha256.Eu3 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:37.894 00:19:37.894 real 2m56.678s 00:19:37.894 user 6m42.998s 00:19:37.894 sys 0m25.304s 00:19:37.894 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:37.894 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.894 ************************************ 00:19:37.894 END TEST nvmf_auth_target 00:19:37.894 ************************************ 00:19:37.894 16:44:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:37.894 16:44:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:37.894 16:44:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:37.894 16:44:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:37.894 16:44:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:38.156 ************************************ 00:19:38.156 START TEST nvmf_bdevio_no_huge 00:19:38.156 ************************************ 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:38.156 * Looking for test storage... 
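[Note on the nvmf_bdevio_no_huge run that starts here: it repeats the bdevio exercise with the target launched without hugepages; per the common.sh lines captured further down, the harness appends "${NO_HUGE[@]}" plus "-i $NVMF_APP_SHM_ID -e 0xFFFF" to NVMF_APP via build_nvmf_app_args. A rough sketch of what such a launch looks like, assuming SPDK's generic --no-huge and -s application options and the usual build/bin/nvmf_tgt binary path (both are assumptions, the exact command line is assembled by the shared helpers):

    # start the nvmf target using anonymous memory instead of hugepages;
    # --no-huge requires an explicit memory size via -s (in MB)
    build/bin/nvmf_tgt --no-huge -s 1024 -i 0 -e 0xFFFF
]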
00:19:38.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:38.156 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:38.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.157 --rc genhtml_branch_coverage=1 00:19:38.157 --rc genhtml_function_coverage=1 00:19:38.157 --rc genhtml_legend=1 00:19:38.157 --rc geninfo_all_blocks=1 00:19:38.157 --rc geninfo_unexecuted_blocks=1 00:19:38.157 00:19:38.157 ' 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:38.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.157 --rc genhtml_branch_coverage=1 00:19:38.157 --rc genhtml_function_coverage=1 00:19:38.157 --rc genhtml_legend=1 00:19:38.157 --rc geninfo_all_blocks=1 00:19:38.157 --rc geninfo_unexecuted_blocks=1 00:19:38.157 00:19:38.157 ' 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:38.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.157 --rc genhtml_branch_coverage=1 00:19:38.157 --rc genhtml_function_coverage=1 00:19:38.157 --rc genhtml_legend=1 00:19:38.157 --rc geninfo_all_blocks=1 00:19:38.157 --rc geninfo_unexecuted_blocks=1 00:19:38.157 00:19:38.157 ' 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:38.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.157 --rc genhtml_branch_coverage=1 00:19:38.157 --rc genhtml_function_coverage=1 00:19:38.157 --rc genhtml_legend=1 00:19:38.157 --rc geninfo_all_blocks=1 00:19:38.157 --rc geninfo_unexecuted_blocks=1 00:19:38.157 00:19:38.157 ' 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:38.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.157 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.418 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:38.418 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:38.418 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:38.418 16:44:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:45.009 
16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.009 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:45.010 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:45.010 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:45.010 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:45.010 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:45.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:45.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:19:45.010 00:19:45.010 --- 10.0.0.2 ping statistics --- 00:19:45.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.010 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:45.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:45.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:19:45.010 00:19:45.010 --- 10.0.0.1 ping statistics --- 00:19:45.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.010 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:45.010 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:45.011 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.011 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:45.011 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:45.011 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:45.011 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:45.011 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:45.011 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:45.271 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=2701968 00:19:45.271 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 2701968 00:19:45.271 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:45.271 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 2701968 ']' 00:19:45.271 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.271 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:19:45.271 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.271 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:45.271 16:44:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:45.271 [2024-10-01 16:44:36.759251] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:19:45.271 [2024-10-01 16:44:36.759321] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:45.271 [2024-10-01 16:44:36.831881] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:45.271 [2024-10-01 16:44:36.909595] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.271 [2024-10-01 16:44:36.909629] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.271 [2024-10-01 16:44:36.909635] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.271 [2024-10-01 16:44:36.909640] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.271 [2024-10-01 16:44:36.909646] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
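[editor note] The nvmf_tcp_init trace above wires the two E810 ports into a point-to-point test net: one port moves into a fresh network namespace as the target side, the other stays in the root namespace as the initiator, and an iptables rule opens the NVMe/TCP port before the target app is launched inside the namespace. A minimal sketch of that same sequence, assuming the ports have already been named cvl_0_0/cvl_0_1 by the harness (SPDK stands in for the repo root):

    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF                # tagged so cleanup can strip it later
    # nvmfappstart then runs the target inside the namespace, as traced above:
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78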
00:19:45.271 [2024-10-01 16:44:36.909752] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:19:45.271 [2024-10-01 16:44:36.909905] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:19:45.271 [2024-10-01 16:44:36.910028] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:45.271 [2024-10-01 16:44:36.910028] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:46.213 [2024-10-01 16:44:37.688911] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:46.213 Malloc0 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:46.213 [2024-10-01 16:44:37.725188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:46.213 { 00:19:46.213 "params": { 00:19:46.213 "name": "Nvme$subsystem", 00:19:46.213 "trtype": "$TEST_TRANSPORT", 00:19:46.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.213 "adrfam": "ipv4", 00:19:46.213 "trsvcid": "$NVMF_PORT", 00:19:46.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.213 "hdgst": ${hdgst:-false}, 00:19:46.213 "ddgst": ${ddgst:-false} 00:19:46.213 }, 00:19:46.213 "method": "bdev_nvme_attach_controller" 00:19:46.213 } 00:19:46.213 EOF 00:19:46.213 )") 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:19:46.213 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:19:46.213 "params": { 00:19:46.213 "name": "Nvme1", 00:19:46.213 "trtype": "tcp", 00:19:46.213 "traddr": "10.0.0.2", 00:19:46.213 "adrfam": "ipv4", 00:19:46.213 "trsvcid": "4420", 00:19:46.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.213 "hdgst": false, 00:19:46.213 "ddgst": false 00:19:46.213 }, 00:19:46.213 "method": "bdev_nvme_attach_controller" 00:19:46.213 }' 00:19:46.213 [2024-10-01 16:44:37.778376] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
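[editor note] With the target listening, bdevio.sh provisions it through the rpc_cmd calls traced above and then points the bdevio tool at a generated initiator config. A sketch of the equivalent standalone invocations, assuming rpc.py talks to the default /var/tmp/spdk.sock and with initiator_json as a stand-in variable holding the bdev_nvme_attach_controller config printed above:

    rpc="$SPDK/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8 KiB in-capsule data
    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # gen_nvmf_target_json's output is handed to bdevio through a fd-backed pseudo-file:
    "$SPDK/test/bdev/bdevio/bdevio" --json /dev/fd/62 --no-huge -s 1024 62<<<"$initiator_json"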
00:19:46.213 [2024-10-01 16:44:37.778426] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2702288 ] 00:19:46.213 [2024-10-01 16:44:37.856364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:46.473 [2024-10-01 16:44:37.944798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.473 [2024-10-01 16:44:37.944915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.473 [2024-10-01 16:44:37.944918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.473 I/O targets: 00:19:46.473 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:46.473 00:19:46.473 00:19:46.473 CUnit - A unit testing framework for C - Version 2.1-3 00:19:46.473 http://cunit.sourceforge.net/ 00:19:46.473 00:19:46.473 00:19:46.473 Suite: bdevio tests on: Nvme1n1 00:19:46.734 Test: blockdev write read block ...passed 00:19:46.734 Test: blockdev write zeroes read block ...passed 00:19:46.734 Test: blockdev write zeroes read no split ...passed 00:19:46.734 Test: blockdev write zeroes read split ...passed 00:19:46.734 Test: blockdev write zeroes read split partial ...passed 00:19:46.734 Test: blockdev reset ...[2024-10-01 16:44:38.309069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:46.734 [2024-10-01 16:44:38.309134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1822250 (9): Bad file descriptor 00:19:46.734 [2024-10-01 16:44:38.321376] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:46.734 passed 00:19:46.734 Test: blockdev write read 8 blocks ...passed 00:19:46.734 Test: blockdev write read size > 128k ...passed 00:19:46.734 Test: blockdev write read invalid size ...passed 00:19:46.734 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:46.734 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:46.734 Test: blockdev write read max offset ...passed 00:19:46.995 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:46.995 Test: blockdev writev readv 8 blocks ...passed 00:19:46.995 Test: blockdev writev readv 30 x 1block ...passed 00:19:46.995 Test: blockdev writev readv block ...passed 00:19:46.995 Test: blockdev writev readv size > 128k ...passed 00:19:46.995 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:46.995 Test: blockdev comparev and writev ...[2024-10-01 16:44:38.584755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:46.995 [2024-10-01 16:44:38.584783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:46.995 [2024-10-01 16:44:38.584799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:46.995 [2024-10-01 16:44:38.584805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:46.995 [2024-10-01 16:44:38.585254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:46.995 [2024-10-01 16:44:38.585264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:46.995 [2024-10-01 16:44:38.585274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:46.995 [2024-10-01 16:44:38.585280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:46.995 [2024-10-01 16:44:38.585750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:46.995 [2024-10-01 16:44:38.585759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:46.995 [2024-10-01 16:44:38.585769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:46.995 [2024-10-01 16:44:38.585775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:46.995 [2024-10-01 16:44:38.586180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:46.995 [2024-10-01 16:44:38.586189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:46.995 [2024-10-01 16:44:38.586200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:46.995 [2024-10-01 16:44:38.586206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:46.995 passed 00:19:46.995 Test: blockdev nvme passthru rw ...passed 00:19:46.995 Test: blockdev nvme passthru vendor specific ...[2024-10-01 16:44:38.669729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:46.995 [2024-10-01 16:44:38.669741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:46.995 [2024-10-01 16:44:38.670061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:46.995 [2024-10-01 16:44:38.670069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:46.995 [2024-10-01 16:44:38.670412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:46.995 [2024-10-01 16:44:38.670420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:46.995 [2024-10-01 16:44:38.670747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:46.995 [2024-10-01 16:44:38.670757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:46.995 passed 00:19:47.255 Test: blockdev nvme admin passthru ...passed 00:19:47.255 Test: blockdev copy ...passed 00:19:47.255 00:19:47.255 Run Summary: Type Total Ran Passed Failed Inactive 00:19:47.255 suites 1 1 n/a 0 0 00:19:47.255 tests 23 23 23 0 0 00:19:47.255 asserts 152 152 152 0 n/a 00:19:47.255 00:19:47.255 Elapsed time = 1.237 seconds 00:19:47.514 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:47.514 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.514 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:47.514 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.514 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:47.514 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:47.514 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:47.514 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:47.514 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:47.514 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:47.514 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:47.514 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:47.514 rmmod nvme_tcp 00:19:47.514 rmmod nvme_fabrics 00:19:47.514 rmmod nvme_keyring 00:19:47.514 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:47.514 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:47.514 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:47.514 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 2701968 ']' 00:19:47.514 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 2701968 00:19:47.514 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 2701968 ']' 00:19:47.514 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 2701968 00:19:47.514 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:19:47.514 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:47.514 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2701968 00:19:47.514 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:19:47.514 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:19:47.514 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2701968' 00:19:47.514 killing process with pid 2701968 00:19:47.514 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 2701968 00:19:47.514 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 2701968 00:19:47.774 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:47.774 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:47.774 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:47.774 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:47.774 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:19:47.774 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:47.774 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:19:47.774 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:47.774 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:47.774 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.774 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.774 16:44:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:50.316 00:19:50.316 real 0m11.850s 00:19:50.316 user 0m13.694s 00:19:50.316 sys 0m6.209s 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:50.316 ************************************ 00:19:50.316 END TEST nvmf_bdevio_no_huge 00:19:50.316 ************************************ 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:50.316 ************************************ 00:19:50.316 START TEST nvmf_tls 00:19:50.316 ************************************ 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:50.316 * Looking for test storage... 00:19:50.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:50.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.316 --rc genhtml_branch_coverage=1 00:19:50.316 --rc genhtml_function_coverage=1 00:19:50.316 --rc genhtml_legend=1 00:19:50.316 --rc geninfo_all_blocks=1 00:19:50.316 --rc geninfo_unexecuted_blocks=1 00:19:50.316 00:19:50.316 ' 00:19:50.316 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:50.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.316 --rc genhtml_branch_coverage=1 00:19:50.316 --rc genhtml_function_coverage=1 00:19:50.317 --rc genhtml_legend=1 00:19:50.317 --rc geninfo_all_blocks=1 00:19:50.317 --rc geninfo_unexecuted_blocks=1 00:19:50.317 00:19:50.317 ' 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:50.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.317 --rc genhtml_branch_coverage=1 00:19:50.317 --rc genhtml_function_coverage=1 00:19:50.317 --rc genhtml_legend=1 00:19:50.317 --rc geninfo_all_blocks=1 00:19:50.317 --rc geninfo_unexecuted_blocks=1 00:19:50.317 00:19:50.317 ' 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:50.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.317 --rc genhtml_branch_coverage=1 00:19:50.317 --rc genhtml_function_coverage=1 00:19:50.317 --rc genhtml_legend=1 00:19:50.317 --rc geninfo_all_blocks=1 00:19:50.317 --rc geninfo_unexecuted_blocks=1 00:19:50.317 00:19:50.317 ' 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
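[editor note] The tls.sh preamble above is probing the installed lcov with scripts/common.sh's component-wise version comparator (the lt 1.15 2 call): both version strings are split on '.', '-' and ':' and compared field by field, padding the shorter one with zeros. A compact re-implementation of the idea with hypothetical names, not SPDK's actual helpers, and numeric components only:

    # returns 0 when $1 sorts strictly before $2, component-wise
    ver_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0   # force base 10
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    ver_lt 1.15 2 && echo 'old lcov: keep the --rc lcov_* option spellings'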
00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:50.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:50.317 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:58.448 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:58.448 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:58.448 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:58.449 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:58.449 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:58.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:19:58.449 00:19:58.449 --- 10.0.0.2 ping statistics --- 00:19:58.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.449 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:58.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:58.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:19:58.449 00:19:58.449 --- 10.0.0.1 ping statistics --- 00:19:58.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.449 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2706505 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2706505 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2706505 ']' 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:58.449 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.449 [2024-10-01 16:44:49.046985] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:19:58.449 [2024-10-01 16:44:49.047035] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.449 [2024-10-01 16:44:49.105505] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.449 [2024-10-01 16:44:49.158768] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.449 [2024-10-01 16:44:49.158802] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.449 [2024-10-01 16:44:49.158809] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.449 [2024-10-01 16:44:49.158814] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.449 [2024-10-01 16:44:49.158818] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.449 [2024-10-01 16:44:49.158840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.449 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.449 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:58.449 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:58.449 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:58.449 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.449 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.449 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:58.449 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:58.449 true 00:19:58.449 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:58.449 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:58.449 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:58.449 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:58.449 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:58.449 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:58.449 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:58.449 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:58.449 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:58.449 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:58.709 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:58.709 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:58.969 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:58.969 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:58.969 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:58.970 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:59.230 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:59.230 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:59.230 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:59.230 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:59.230 16:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:59.489 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:59.489 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:59.489 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:59.749 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:59.749 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.7a9N5QsfNx 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.YAo7LE1ZKx 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.7a9N5QsfNx 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.YAo7LE1ZKx 00:20:00.009 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:00.269 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:00.529 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.7a9N5QsfNx 00:20:00.529 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.7a9N5QsfNx 00:20:00.529 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:00.789 [2024-10-01 16:44:52.303141] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.789 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:01.049 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:01.049 [2024-10-01 16:44:52.688080] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.049 [2024-10-01 16:44:52.688275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.049 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:01.310 malloc0 00:20:01.310 16:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:01.569 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.7a9N5QsfNx 00:20:01.828 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:02.088 16:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.7a9N5QsfNx 00:20:12.210 Initializing NVMe Controllers 00:20:12.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:12.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:12.210 Initialization complete. Launching workers. 00:20:12.210 ======================================================== 00:20:12.210 Latency(us) 00:20:12.210 Device Information : IOPS MiB/s Average min max 00:20:12.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18215.83 71.16 3513.54 1046.30 4332.41 00:20:12.210 ======================================================== 00:20:12.210 Total : 18215.83 71.16 3513.54 1046.30 4332.41 00:20:12.210 00:20:12.210 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7a9N5QsfNx 00:20:12.210 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:12.210 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:12.210 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:12.210 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7a9N5QsfNx 00:20:12.210 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:12.210 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2709106 00:20:12.210 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:12.210 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2709106 /var/tmp/bdevperf.sock 00:20:12.210 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:12.210 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2709106 ']' 00:20:12.210 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.210 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:12.210 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:12.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.210 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:12.210 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.210 [2024-10-01 16:45:03.752751] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:20:12.210 [2024-10-01 16:45:03.752806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2709106 ] 00:20:12.210 [2024-10-01 16:45:03.803529] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.210 [2024-10-01 16:45:03.856463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.502 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:12.502 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:12.502 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7a9N5QsfNx 00:20:12.502 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:12.761 [2024-10-01 16:45:04.309214] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:12.761 TLSTESTn1 00:20:12.761 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:13.021 Running I/O for 10 seconds... 
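Before the ten-second run below prints its per-second samples, it is worth collecting the target-side RPCs that built this happy path, since they are scattered across several screens of trace above. A condensed sketch, with rpc.py standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used in the log:

  # TLS 1.3 only on the ssl sock impl, then bring the framework up
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py framework_start_init
  # TCP transport, subsystem cnode1 with one malloc namespace, TLS-enabled listener (-k)
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # register the first interchange PSK and bind it to host1
  rpc.py keyring_file_add_key key0 /tmp/tmp.7a9N5QsfNx
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With that in place, spdk_nvme_perf (-S ssl, --psk-path) already completed a clean 10-second randrw run above, and bdevperf has just attached through the same listener with --psk key0; its throughput samples and a JSON results blob follow.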
00:20:23.123 2174.00 IOPS, 8.49 MiB/s 1813.50 IOPS, 7.08 MiB/s 1686.33 IOPS, 6.59 MiB/s 2319.25 IOPS, 9.06 MiB/s 2712.00 IOPS, 10.59 MiB/s 2740.67 IOPS, 10.71 MiB/s 2573.57 IOPS, 10.05 MiB/s 2461.12 IOPS, 9.61 MiB/s 2504.33 IOPS, 9.78 MiB/s 2411.10 IOPS, 9.42 MiB/s 00:20:23.123 Latency(us) 00:20:23.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.123 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:23.123 Verification LBA range: start 0x0 length 0x2000 00:20:23.123 TLSTESTn1 : 10.10 2400.41 9.38 0.00 0.00 53104.90 5217.67 111310.38 00:20:23.123 =================================================================================================================== 00:20:23.123 Total : 2400.41 9.38 0.00 0.00 53104.90 5217.67 111310.38 00:20:23.123 { 00:20:23.123 "results": [ 00:20:23.123 { 00:20:23.123 "job": "TLSTESTn1", 00:20:23.123 "core_mask": "0x4", 00:20:23.123 "workload": "verify", 00:20:23.123 "status": "finished", 00:20:23.123 "verify_range": { 00:20:23.123 "start": 0, 00:20:23.123 "length": 8192 00:20:23.123 }, 00:20:23.123 "queue_depth": 128, 00:20:23.123 "io_size": 4096, 00:20:23.123 "runtime": 10.097861, 00:20:23.123 "iops": 2400.409354020619, 00:20:23.123 "mibps": 9.376599039143043, 00:20:23.123 "io_failed": 0, 00:20:23.123 "io_timeout": 0, 00:20:23.123 "avg_latency_us": 53104.899780201646, 00:20:23.123 "min_latency_us": 5217.673846153846, 00:20:23.123 "max_latency_us": 111310.37538461538 00:20:23.123 } 00:20:23.123 ], 00:20:23.123 "core_count": 1 00:20:23.123 } 00:20:23.123 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:23.123 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2709106 00:20:23.124 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2709106 ']' 00:20:23.124 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2709106 00:20:23.124 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:23.124 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:23.124 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2709106 00:20:23.124 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:23.124 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:23.124 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2709106' 00:20:23.124 killing process with pid 2709106 00:20:23.124 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2709106 00:20:23.124 Received shutdown signal, test time was about 10.000000 seconds 00:20:23.124 00:20:23.124 Latency(us) 00:20:23.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.124 =================================================================================================================== 00:20:23.124 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:23.124 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2709106 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/tmp/tmp.YAo7LE1ZKx 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YAo7LE1ZKx 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YAo7LE1ZKx 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YAo7LE1ZKx 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2711386 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2711386 /var/tmp/bdevperf.sock 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2711386 ']' 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.384 16:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:23.384 [2024-10-01 16:45:14.896294] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:20:23.384 [2024-10-01 16:45:14.896371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2711386 ] 00:20:23.384 [2024-10-01 16:45:14.948147] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.384 [2024-10-01 16:45:15.000202] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.384 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:23.384 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:23.384 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YAo7LE1ZKx 00:20:23.644 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:23.904 [2024-10-01 16:45:15.484629] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.904 [2024-10-01 16:45:15.491643] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:23.904 [2024-10-01 16:45:15.492678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c1a10 (107): Transport endpoint is not connected 00:20:23.904 [2024-10-01 16:45:15.493674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c1a10 (9): Bad file descriptor 00:20:23.904 [2024-10-01 16:45:15.494675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:23.904 [2024-10-01 16:45:15.494683] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:23.904 [2024-10-01 16:45:15.494690] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:23.904 [2024-10-01 16:45:15.494698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
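That failure is the point of this case: the initiator presented /tmp/tmp.YAo7LE1ZKx, the second key (key_2), while the target only has the first key registered for host1, so the TLS handshake never completes and the attach decays into the errno 107 cascade above; the JSON-RPC exchange that reported it is dumped next. The interchange keys themselves were produced earlier by format_interchange_psk, whose inline python body xtrace hides. A minimal sketch of the encoding it evidently performs, judging from the keys printed above (the helper name make_tls_psk is mine, not SPDK's):

  # NVMe TLS PSK interchange format: NVMeTLSkey-1:<digest>:base64(key || CRC32-LE):
  make_tls_psk() {
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$1" "$2"
  }
  make_tls_psk 00112233445566778899aabbccddeeff 1
  # prints NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: (key0 above)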
00:20:23.904 request: 00:20:23.904 { 00:20:23.904 "name": "TLSTEST", 00:20:23.904 "trtype": "tcp", 00:20:23.904 "traddr": "10.0.0.2", 00:20:23.904 "adrfam": "ipv4", 00:20:23.904 "trsvcid": "4420", 00:20:23.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.904 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.904 "prchk_reftag": false, 00:20:23.904 "prchk_guard": false, 00:20:23.904 "hdgst": false, 00:20:23.904 "ddgst": false, 00:20:23.904 "psk": "key0", 00:20:23.904 "allow_unrecognized_csi": false, 00:20:23.904 "method": "bdev_nvme_attach_controller", 00:20:23.904 "req_id": 1 00:20:23.904 } 00:20:23.904 Got JSON-RPC error response 00:20:23.904 response: 00:20:23.904 { 00:20:23.904 "code": -5, 00:20:23.904 "message": "Input/output error" 00:20:23.904 } 00:20:23.904 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2711386 00:20:23.904 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2711386 ']' 00:20:23.904 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2711386 00:20:23.904 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:23.904 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:23.904 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2711386 00:20:23.904 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:23.904 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:23.904 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2711386' 00:20:23.904 killing process with pid 2711386 00:20:23.904 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2711386 00:20:23.904 Received shutdown signal, test time was about 10.000000 seconds 00:20:23.904 00:20:23.904 Latency(us) 00:20:23.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.904 =================================================================================================================== 00:20:23.904 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:23.904 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2711386 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7a9N5QsfNx 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7a9N5QsfNx 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7a9N5QsfNx 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7a9N5QsfNx 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2711413 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2711413 /var/tmp/bdevperf.sock 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2711413 ']' 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:24.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:24.164 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.164 [2024-10-01 16:45:15.733385] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:20:24.164 [2024-10-01 16:45:15.733436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2711413 ] 00:20:24.164 [2024-10-01 16:45:15.783484] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.164 [2024-10-01 16:45:15.836217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.424 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:24.424 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:24.424 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7a9N5QsfNx 00:20:24.684 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:24.684 [2024-10-01 16:45:16.304804] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.684 [2024-10-01 16:45:16.313551] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:24.684 [2024-10-01 16:45:16.313572] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:24.684 [2024-10-01 16:45:16.313592] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:24.684 [2024-10-01 16:45:16.314027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6da10 (107): Transport endpoint is not connected 00:20:24.684 [2024-10-01 16:45:16.315022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6da10 (9): Bad file descriptor 00:20:24.684 [2024-10-01 16:45:16.316023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:24.684 [2024-10-01 16:45:16.316035] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:24.684 [2024-10-01 16:45:16.316042] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:24.684 [2024-10-01 16:45:16.316050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
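Same cascade, different trigger: the key material here is the good key0, but the connection identifies itself as host2, and tcp_sock_get_key above shows the identity string the target searches, NVMe0R01 <hostnqn> <subnqn>. Only host1 was ever bound to key0 with nvmf_subsystem_add_host, so the lookup misses and the handshake is refused before the controller can initialize; the JSON-RPC dump follows. Client-side, the failing attach differs from the working one only in the host NQN (a sketch, each command issued to its own bdevperf instance as in the trace; rpc.py abbreviates the full script path):

  # works: host1 carries a PSK binding on the target
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
  # fails: same transport, same key, but no PSK entry for host2
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host2 --psk key0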
00:20:24.684 request: 00:20:24.684 { 00:20:24.684 "name": "TLSTEST", 00:20:24.684 "trtype": "tcp", 00:20:24.684 "traddr": "10.0.0.2", 00:20:24.684 "adrfam": "ipv4", 00:20:24.684 "trsvcid": "4420", 00:20:24.684 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.684 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:24.684 "prchk_reftag": false, 00:20:24.684 "prchk_guard": false, 00:20:24.684 "hdgst": false, 00:20:24.684 "ddgst": false, 00:20:24.684 "psk": "key0", 00:20:24.684 "allow_unrecognized_csi": false, 00:20:24.684 "method": "bdev_nvme_attach_controller", 00:20:24.684 "req_id": 1 00:20:24.684 } 00:20:24.684 Got JSON-RPC error response 00:20:24.684 response: 00:20:24.684 { 00:20:24.684 "code": -5, 00:20:24.684 "message": "Input/output error" 00:20:24.684 } 00:20:24.684 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2711413 00:20:24.684 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2711413 ']' 00:20:24.684 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2711413 00:20:24.684 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:24.684 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:24.684 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2711413 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2711413' 00:20:24.944 killing process with pid 2711413 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2711413 00:20:24.944 Received shutdown signal, test time was about 10.000000 seconds 00:20:24.944 00:20:24.944 Latency(us) 00:20:24.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.944 =================================================================================================================== 00:20:24.944 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2711413 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7a9N5QsfNx 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7a9N5QsfNx 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7a9N5QsfNx 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7a9N5QsfNx 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2711700 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2711700 /var/tmp/bdevperf.sock 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2711700 ']' 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:24.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:24.944 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.944 [2024-10-01 16:45:16.553254] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:20:24.944 [2024-10-01 16:45:16.553306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2711700 ] 00:20:24.944 [2024-10-01 16:45:16.602979] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.205 [2024-10-01 16:45:16.655120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.205 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:25.205 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:25.205 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7a9N5QsfNx 00:20:25.465 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:25.465 [2024-10-01 16:45:17.123646] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.465 [2024-10-01 16:45:17.127913] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:25.465 [2024-10-01 16:45:17.127935] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:25.465 [2024-10-01 16:45:17.127955] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:25.465 [2024-10-01 16:45:17.128640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x104aa10 (107): Transport endpoint is not connected 00:20:25.465 [2024-10-01 16:45:17.129635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x104aa10 (9): Bad file descriptor 00:20:25.465 [2024-10-01 16:45:17.130637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:25.465 [2024-10-01 16:45:17.130645] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:25.465 [2024-10-01 16:45:17.130651] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:25.465 [2024-10-01 16:45:17.130659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
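Third variant, same lookup: host1 is legitimate this time, but nqn.2016-06.io.spdk:cnode2 was never created on the target, so the NVMe0R01 identity for host1/cnode2 has no PSK either and the attach dies the same way; the request/response pair follows. All three negative cases lean on the harness's NOT wrapper, whose traced fragments (local es=0, valid_exec_arg, (( es > 128 )), (( !es == 0 ))) recur throughout. Roughly, ignoring its crash screening and allowed-error list:

  # rough sketch of autotest_common.sh's NOT(): succeed only when the wrapped command fails
  NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # the real helper additionally screens es > 128 (killed by signal)
  }
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7a9N5QsfNx

so the return 1 from run_bdevperf at target/tls.sh@38 is exactly what lets the suite continue.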
00:20:25.465 request: 00:20:25.465 { 00:20:25.465 "name": "TLSTEST", 00:20:25.465 "trtype": "tcp", 00:20:25.465 "traddr": "10.0.0.2", 00:20:25.465 "adrfam": "ipv4", 00:20:25.465 "trsvcid": "4420", 00:20:25.465 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:25.465 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.465 "prchk_reftag": false, 00:20:25.465 "prchk_guard": false, 00:20:25.465 "hdgst": false, 00:20:25.465 "ddgst": false, 00:20:25.465 "psk": "key0", 00:20:25.465 "allow_unrecognized_csi": false, 00:20:25.465 "method": "bdev_nvme_attach_controller", 00:20:25.465 "req_id": 1 00:20:25.465 } 00:20:25.465 Got JSON-RPC error response 00:20:25.465 response: 00:20:25.465 { 00:20:25.465 "code": -5, 00:20:25.465 "message": "Input/output error" 00:20:25.465 } 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2711700 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2711700 ']' 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2711700 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2711700 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2711700' 00:20:25.725 killing process with pid 2711700 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2711700 00:20:25.725 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.725 00:20:25.725 Latency(us) 00:20:25.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.725 =================================================================================================================== 00:20:25.725 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2711700 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local 
arg=run_bdevperf 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:25.725 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:25.726 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:25.726 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:25.726 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:25.726 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2711727 00:20:25.726 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:25.726 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2711727 /var/tmp/bdevperf.sock 00:20:25.726 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:25.726 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2711727 ']' 00:20:25.726 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.726 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:25.726 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.726 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:25.726 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.726 [2024-10-01 16:45:17.385433] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:20:25.726 [2024-10-01 16:45:17.385485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2711727 ] 00:20:25.985 [2024-10-01 16:45:17.436574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.985 [2024-10-01 16:45:17.488836] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.985 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:25.985 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:25.985 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:26.245 [2024-10-01 16:45:17.748937] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:26.245 [2024-10-01 16:45:17.748965] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:26.245 request: 00:20:26.245 { 00:20:26.245 "name": "key0", 00:20:26.245 "path": "", 00:20:26.245 "method": "keyring_file_add_key", 00:20:26.245 "req_id": 1 00:20:26.245 } 00:20:26.245 Got JSON-RPC error response 00:20:26.245 response: 00:20:26.245 { 00:20:26.245 "code": -1, 00:20:26.245 "message": "Operation not permitted" 00:20:26.245 } 00:20:26.245 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:26.506 [2024-10-01 16:45:17.949517] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.506 [2024-10-01 16:45:17.949539] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:26.506 request: 00:20:26.506 { 00:20:26.506 "name": "TLSTEST", 00:20:26.506 "trtype": "tcp", 00:20:26.506 "traddr": "10.0.0.2", 00:20:26.506 "adrfam": "ipv4", 00:20:26.506 "trsvcid": "4420", 00:20:26.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:26.506 "prchk_reftag": false, 00:20:26.506 "prchk_guard": false, 00:20:26.506 "hdgst": false, 00:20:26.506 "ddgst": false, 00:20:26.506 "psk": "key0", 00:20:26.506 "allow_unrecognized_csi": false, 00:20:26.506 "method": "bdev_nvme_attach_controller", 00:20:26.506 "req_id": 1 00:20:26.506 } 00:20:26.506 Got JSON-RPC error response 00:20:26.506 response: 00:20:26.506 { 00:20:26.506 "code": -126, 00:20:26.506 "message": "Required key not available" 00:20:26.506 } 00:20:26.506 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2711727 00:20:26.506 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2711727 ']' 00:20:26.506 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2711727 00:20:26.506 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:26.506 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:26.506 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
2711727 00:20:26.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:26.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:26.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2711727' 00:20:26.506 killing process with pid 2711727 00:20:26.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2711727 00:20:26.506 Received shutdown signal, test time was about 10.000000 seconds 00:20:26.506 00:20:26.506 Latency(us) 00:20:26.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.506 =================================================================================================================== 00:20:26.506 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:26.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2711727 00:20:26.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:26.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:26.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:26.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:26.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:26.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2706505 00:20:26.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2706505 ']' 00:20:26.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2706505 00:20:26.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:26.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:26.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2706505 00:20:26.766 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:26.766 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:26.766 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2706505' 00:20:26.766 killing process with pid 2706505 00:20:26.766 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2706505 00:20:26.766 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2706505 00:20:26.766 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:26.766 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:26.766 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:20:26.766 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:20:26.766 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # 
key=00112233445566778899aabbccddeeff0011223344556677 00:20:26.766 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:20:26.766 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:20:26.766 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:26.766 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:26.766 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.qEkjAqMy6V 00:20:26.766 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:26.766 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.qEkjAqMy6V 00:20:26.767 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:26.767 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:26.767 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:26.767 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.767 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2712042 00:20:26.767 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2712042 00:20:26.767 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:26.767 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2712042 ']' 00:20:26.767 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.767 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:26.767 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.767 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:26.767 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.767 [2024-10-01 16:45:18.440498] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:20:26.767 [2024-10-01 16:45:18.440550] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.027 [2024-10-01 16:45:18.497234] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.027 [2024-10-01 16:45:18.549347] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.027 [2024-10-01 16:45:18.549382] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
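The format_interchange_psk/format_key step above builds the TLS PSK used by the rest of this run via an inline python snippet that the xtrace does not expand. A minimal sketch of what it computes, assuming (as the base64 output implies) that the trailer appended to the configured key is its CRC32 in little-endian order; format_interchange_psk below is a hypothetical Python stand-in for the shell helper:

import base64
import zlib

def format_interchange_psk(key: str, hmac: int, prefix: str = "NVMeTLSkey-1") -> str:
    # NVMe/TCP PSK interchange format: prefix, HMAC id (01 = SHA-256, 02 = SHA-384),
    # then base64 of the configured key followed by its little-endian CRC32.
    data = key.encode()
    crc = zlib.crc32(data).to_bytes(4, byteorder="little")
    return f"{prefix}:{hmac:02x}:{base64.b64encode(data + crc).decode()}:"

print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

This reproduces the key_long value that is then written to /tmp/tmp.qEkjAqMy6V and chmod'ed to 0600 above.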
00:20:27.027 [2024-10-01 16:45:18.549388] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:27.027 [2024-10-01 16:45:18.549394] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:27.027 [2024-10-01 16:45:18.549399] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:27.027 [2024-10-01 16:45:18.549417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.027 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:27.027 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:27.027 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:27.027 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:27.027 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.027 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.027 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.qEkjAqMy6V 00:20:27.027 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qEkjAqMy6V 00:20:27.027 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:27.287 [2024-10-01 16:45:18.854481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.287 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:27.546 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:27.806 [2024-10-01 16:45:19.251467] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:27.806 [2024-10-01 16:45:19.251662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.806 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:27.806 malloc0 00:20:27.806 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:28.066 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qEkjAqMy6V 00:20:28.325 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:28.585 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qEkjAqMy6V 00:20:28.585 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:20:28.585 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:28.585 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:28.585 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qEkjAqMy6V 00:20:28.585 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:28.585 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2712322 00:20:28.585 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:28.585 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2712322 /var/tmp/bdevperf.sock 00:20:28.585 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:28.585 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2712322 ']' 00:20:28.585 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:28.585 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:28.585 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:28.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:28.585 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:28.585 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.585 [2024-10-01 16:45:20.154426] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
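Everything in this block is driven through scripts/rpc.py: the target over its default /var/tmp/spdk.sock, and bdevperf over the socket passed with -r /var/tmp/bdevperf.sock. Each call is a single JSON-RPC 2.0 request over a Unix domain socket, which is what the request/response dumps in this log are showing. A minimal hand-rolled client as a sketch (spdk_rpc is a hypothetical helper; the params mirror the keyring call traced above):

import json
import socket

def spdk_rpc(sock_path: str, method: str, params: dict, req_id: int = 1) -> dict:
    # One JSON-RPC 2.0 request/response over an SPDK app's Unix domain socket.
    req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full response")
            buf += chunk
            try:
                return json.loads(buf)   # the response is a single JSON object
            except json.JSONDecodeError:
                continue                 # partial read; keep receiving

resp = spdk_rpc("/var/tmp/bdevperf.sock", "keyring_file_add_key",
                {"name": "key0", "path": "/tmp/tmp.qEkjAqMy6V"})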
00:20:28.585 [2024-10-01 16:45:20.154481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2712322 ] 00:20:28.585 [2024-10-01 16:45:20.205129] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.585 [2024-10-01 16:45:20.258192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.845 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:28.845 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:28.845 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qEkjAqMy6V 00:20:29.105 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:29.105 [2024-10-01 16:45:20.718813] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:29.364 TLSTESTn1 00:20:29.364 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:29.364 Running I/O for 10 seconds... 00:20:39.638 2146.00 IOPS, 8.38 MiB/s 1872.50 IOPS, 7.31 MiB/s 1768.67 IOPS, 6.91 MiB/s 2019.50 IOPS, 7.89 MiB/s 1967.20 IOPS, 7.68 MiB/s 1929.17 IOPS, 7.54 MiB/s 1897.14 IOPS, 7.41 MiB/s 2048.25 IOPS, 8.00 MiB/s 2029.67 IOPS, 7.93 MiB/s 2006.40 IOPS, 7.84 MiB/s 00:20:39.638 Latency(us) 00:20:39.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.638 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:39.638 Verification LBA range: start 0x0 length 0x2000 00:20:39.638 TLSTESTn1 : 10.10 2000.09 7.81 0.00 0.00 63727.52 6276.33 91952.05 00:20:39.638 =================================================================================================================== 00:20:39.638 Total : 2000.09 7.81 0.00 0.00 63727.52 6276.33 91952.05 00:20:39.638 { 00:20:39.638 "results": [ 00:20:39.638 { 00:20:39.638 "job": "TLSTESTn1", 00:20:39.638 "core_mask": "0x4", 00:20:39.638 "workload": "verify", 00:20:39.638 "status": "finished", 00:20:39.638 "verify_range": { 00:20:39.638 "start": 0, 00:20:39.638 "length": 8192 00:20:39.638 }, 00:20:39.638 "queue_depth": 128, 00:20:39.638 "io_size": 4096, 00:20:39.638 "runtime": 10.095058, 00:20:39.638 "iops": 2000.0875675999089, 00:20:39.638 "mibps": 7.812842060937144, 00:20:39.638 "io_failed": 0, 00:20:39.638 "io_timeout": 0, 00:20:39.638 "avg_latency_us": 63727.52306549376, 00:20:39.638 "min_latency_us": 6276.332307692308, 00:20:39.638 "max_latency_us": 91952.04923076923 00:20:39.638 } 00:20:39.638 ], 00:20:39.638 "core_count": 1 00:20:39.638 } 00:20:39.638 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:39.638 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2712322 00:20:39.638 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@950 -- # '[' -z 2712322 ']' 00:20:39.638 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2712322 00:20:39.638 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:39.638 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:39.638 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2712322 00:20:39.638 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:39.638 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:39.638 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2712322' 00:20:39.638 killing process with pid 2712322 00:20:39.638 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2712322 00:20:39.638 Received shutdown signal, test time was about 10.000000 seconds 00:20:39.638 00:20:39.638 Latency(us) 00:20:39.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.638 =================================================================================================================== 00:20:39.638 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.638 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2712322 00:20:39.638 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.qEkjAqMy6V 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qEkjAqMy6V 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qEkjAqMy6V 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qEkjAqMy6V 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qEkjAqMy6V 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2714160 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 
-- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2714160 /var/tmp/bdevperf.sock 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2714160 ']' 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:39.639 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.639 [2024-10-01 16:45:31.279060] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:20:39.639 [2024-10-01 16:45:31.279113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2714160 ] 00:20:39.899 [2024-10-01 16:45:31.329954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.899 [2024-10-01 16:45:31.382044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.899 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:39.899 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:39.899 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qEkjAqMy6V 00:20:40.159 [2024-10-01 16:45:31.642154] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qEkjAqMy6V': 0100666 00:20:40.159 [2024-10-01 16:45:31.642182] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:40.159 request: 00:20:40.159 { 00:20:40.159 "name": "key0", 00:20:40.159 "path": "/tmp/tmp.qEkjAqMy6V", 00:20:40.159 "method": "keyring_file_add_key", 00:20:40.159 "req_id": 1 00:20:40.159 } 00:20:40.159 Got JSON-RPC error response 00:20:40.159 response: 00:20:40.159 { 00:20:40.159 "code": -1, 00:20:40.159 "message": "Operation not permitted" 00:20:40.159 } 00:20:40.159 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:40.420 [2024-10-01 16:45:31.850744] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.420 [2024-10-01 16:45:31.850766] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not 
load PSK: key0 00:20:40.420 request: 00:20:40.420 { 00:20:40.420 "name": "TLSTEST", 00:20:40.420 "trtype": "tcp", 00:20:40.420 "traddr": "10.0.0.2", 00:20:40.420 "adrfam": "ipv4", 00:20:40.420 "trsvcid": "4420", 00:20:40.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.420 "prchk_reftag": false, 00:20:40.420 "prchk_guard": false, 00:20:40.420 "hdgst": false, 00:20:40.420 "ddgst": false, 00:20:40.420 "psk": "key0", 00:20:40.420 "allow_unrecognized_csi": false, 00:20:40.420 "method": "bdev_nvme_attach_controller", 00:20:40.420 "req_id": 1 00:20:40.420 } 00:20:40.420 Got JSON-RPC error response 00:20:40.420 response: 00:20:40.420 { 00:20:40.420 "code": -126, 00:20:40.420 "message": "Required key not available" 00:20:40.420 } 00:20:40.420 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2714160 00:20:40.420 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2714160 ']' 00:20:40.420 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2714160 00:20:40.420 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:40.420 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:40.420 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2714160 00:20:40.420 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:40.420 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:40.420 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2714160' 00:20:40.420 killing process with pid 2714160 00:20:40.420 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2714160 00:20:40.420 Received shutdown signal, test time was about 10.000000 seconds 00:20:40.420 00:20:40.420 Latency(us) 00:20:40.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.420 =================================================================================================================== 00:20:40.420 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:40.420 16:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2714160 00:20:40.420 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:40.420 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:40.420 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:40.420 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:40.420 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:40.420 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2712042 00:20:40.420 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2712042 ']' 00:20:40.420 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2712042 00:20:40.420 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:40.420 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:40.420 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2712042 00:20:40.420 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:40.420 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:40.420 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2712042' 00:20:40.420 killing process with pid 2712042 00:20:40.420 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2712042 00:20:40.420 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2712042 00:20:40.679 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:40.679 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:40.679 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:40.679 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.680 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2714227 00:20:40.680 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2714227 00:20:40.680 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:40.680 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2714227 ']' 00:20:40.680 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.680 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:40.680 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.680 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:40.680 16:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.680 [2024-10-01 16:45:32.278028] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:20:40.680 [2024-10-01 16:45:32.278077] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.680 [2024-10-01 16:45:32.333916] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.940 [2024-10-01 16:45:32.386599] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.940 [2024-10-01 16:45:32.386629] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
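Both keyring_file_add_key failures in this log trip checks on the key file itself: the empty path earlier fails the absolute-path check, and the chmod 0666 copy just above fails the permission check, which then cascades into bdev_nvme_attach_controller's "Required key not available". A rough Python rendering of those two checks (check_key_path is hypothetical; the real logic lives in SPDK's file-based keyring module, and the exact permission mask is an assumption consistent with 0600 passing and 0666 failing):

import os
import stat

def check_key_path(path: str) -> None:
    if not os.path.isabs(path):
        # matches: "Non-absolute paths are not allowed"
        raise ValueError(f"Non-absolute paths are not allowed: {path!r}")
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        # matches: "Invalid permissions for key file '...': 0100666"
        raise PermissionError(f"Invalid permissions for key file {path!r}: 0{mode:o}")

check_key_path("/tmp/tmp.qEkjAqMy6V")  # passes at 0600, raises after chmod 0666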
00:20:40.940 [2024-10-01 16:45:32.386635] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.940 [2024-10-01 16:45:32.386640] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.940 [2024-10-01 16:45:32.386644] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.940 [2024-10-01 16:45:32.386660] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.509 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:41.509 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:41.509 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:41.509 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:41.509 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.509 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.509 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.qEkjAqMy6V 00:20:41.509 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:41.509 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.qEkjAqMy6V 00:20:41.509 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:41.509 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.509 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:41.509 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.509 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.qEkjAqMy6V 00:20:41.509 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qEkjAqMy6V 00:20:41.509 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:41.767 [2024-10-01 16:45:33.332934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.767 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:42.026 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:42.285 [2024-10-01 16:45:33.749962] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:42.285 [2024-10-01 16:45:33.750160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.285 16:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:42.543 malloc0 00:20:42.543 16:45:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:42.543 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qEkjAqMy6V 00:20:42.802 [2024-10-01 16:45:34.378012] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qEkjAqMy6V': 0100666 00:20:42.802 [2024-10-01 16:45:34.378036] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:42.802 request: 00:20:42.802 { 00:20:42.802 "name": "key0", 00:20:42.802 "path": "/tmp/tmp.qEkjAqMy6V", 00:20:42.802 "method": "keyring_file_add_key", 00:20:42.802 "req_id": 1 00:20:42.802 } 00:20:42.802 Got JSON-RPC error response 00:20:42.802 response: 00:20:42.802 { 00:20:42.802 "code": -1, 00:20:42.802 "message": "Operation not permitted" 00:20:42.802 } 00:20:42.802 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:43.061 [2024-10-01 16:45:34.586542] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:43.061 [2024-10-01 16:45:34.586571] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:43.061 request: 00:20:43.061 { 00:20:43.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.061 "host": "nqn.2016-06.io.spdk:host1", 00:20:43.061 "psk": "key0", 00:20:43.061 "method": "nvmf_subsystem_add_host", 00:20:43.061 "req_id": 1 00:20:43.061 } 00:20:43.061 Got JSON-RPC error response 00:20:43.061 response: 00:20:43.061 { 00:20:43.061 "code": -32603, 00:20:43.061 "message": "Internal error" 00:20:43.061 } 00:20:43.061 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:43.061 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:43.061 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:43.061 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:43.061 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2714227 00:20:43.061 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2714227 ']' 00:20:43.061 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2714227 00:20:43.061 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:43.061 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:43.061 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2714227 00:20:43.061 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:43.061 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:43.061 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2714227' 00:20:43.061 killing process with pid 2714227 00:20:43.061 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 2714227 00:20:43.061 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2714227 00:20:43.320 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.qEkjAqMy6V 00:20:43.320 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:43.320 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:43.320 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:43.320 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.320 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2714759 00:20:43.320 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2714759 00:20:43.320 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:43.320 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2714759 ']' 00:20:43.320 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.320 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:43.320 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.320 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:43.321 16:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.321 [2024-10-01 16:45:34.847145] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:20:43.321 [2024-10-01 16:45:34.847198] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.321 [2024-10-01 16:45:34.903713] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.321 [2024-10-01 16:45:34.960291] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.321 [2024-10-01 16:45:34.960325] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.321 [2024-10-01 16:45:34.960331] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.321 [2024-10-01 16:45:34.960336] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.321 [2024-10-01 16:45:34.960340] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
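Before the next run starts, one sanity check on the successful 10-second TLSTESTn1 run earlier: with 4 KiB I/O (bdevperf -o 4096), MiB/s is simply IOPS/256, and the results JSON is self-consistent:

iops = 2000.0875675999089          # "iops" from the earlier results JSON
io_size = 4096                     # bdevperf -o 4096
print(iops * io_size / (1 << 20))  # 7.812842060937144, matching the reported "mibps"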
00:20:43.321 [2024-10-01 16:45:34.960355] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.580 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:43.580 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:43.580 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:43.580 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:43.580 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.580 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.580 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.qEkjAqMy6V 00:20:43.580 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qEkjAqMy6V 00:20:43.580 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:43.839 [2024-10-01 16:45:35.269639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.839 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:43.839 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:44.098 [2024-10-01 16:45:35.678652] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:44.098 [2024-10-01 16:45:35.678835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.098 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:44.358 malloc0 00:20:44.358 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:44.618 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qEkjAqMy6V 00:20:44.877 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:44.877 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:44.877 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2715042 00:20:44.877 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:44.877 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2715042 /var/tmp/bdevperf.sock 00:20:44.877 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 2715042 ']' 00:20:44.877 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.877 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:44.877 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.877 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:44.877 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.877 [2024-10-01 16:45:36.544066] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:20:44.877 [2024-10-01 16:45:36.544119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2715042 ] 00:20:45.137 [2024-10-01 16:45:36.594892] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.137 [2024-10-01 16:45:36.647941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.137 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:45.137 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:45.137 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qEkjAqMy6V 00:20:45.396 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:45.656 [2024-10-01 16:45:37.120670] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.656 TLSTESTn1 00:20:45.656 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:45.915 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:45.915 "subsystems": [ 00:20:45.915 { 00:20:45.915 "subsystem": "keyring", 00:20:45.915 "config": [ 00:20:45.915 { 00:20:45.915 "method": "keyring_file_add_key", 00:20:45.915 "params": { 00:20:45.915 "name": "key0", 00:20:45.915 "path": "/tmp/tmp.qEkjAqMy6V" 00:20:45.915 } 00:20:45.915 } 00:20:45.915 ] 00:20:45.915 }, 00:20:45.915 { 00:20:45.915 "subsystem": "iobuf", 00:20:45.915 "config": [ 00:20:45.915 { 00:20:45.916 "method": "iobuf_set_options", 00:20:45.916 "params": { 00:20:45.916 "small_pool_count": 8192, 00:20:45.916 "large_pool_count": 1024, 00:20:45.916 "small_bufsize": 8192, 00:20:45.916 "large_bufsize": 135168 00:20:45.916 } 00:20:45.916 } 00:20:45.916 ] 00:20:45.916 }, 00:20:45.916 { 00:20:45.916 "subsystem": "sock", 00:20:45.916 "config": [ 00:20:45.916 { 00:20:45.916 "method": "sock_set_default_impl", 00:20:45.916 "params": { 00:20:45.916 "impl_name": "posix" 00:20:45.916 } 00:20:45.916 }, 
00:20:45.916 { 00:20:45.916 "method": "sock_impl_set_options", 00:20:45.916 "params": { 00:20:45.916 "impl_name": "ssl", 00:20:45.916 "recv_buf_size": 4096, 00:20:45.916 "send_buf_size": 4096, 00:20:45.916 "enable_recv_pipe": true, 00:20:45.916 "enable_quickack": false, 00:20:45.916 "enable_placement_id": 0, 00:20:45.916 "enable_zerocopy_send_server": true, 00:20:45.916 "enable_zerocopy_send_client": false, 00:20:45.916 "zerocopy_threshold": 0, 00:20:45.916 "tls_version": 0, 00:20:45.916 "enable_ktls": false 00:20:45.916 } 00:20:45.916 }, 00:20:45.916 { 00:20:45.916 "method": "sock_impl_set_options", 00:20:45.916 "params": { 00:20:45.916 "impl_name": "posix", 00:20:45.916 "recv_buf_size": 2097152, 00:20:45.916 "send_buf_size": 2097152, 00:20:45.916 "enable_recv_pipe": true, 00:20:45.916 "enable_quickack": false, 00:20:45.916 "enable_placement_id": 0, 00:20:45.916 "enable_zerocopy_send_server": true, 00:20:45.916 "enable_zerocopy_send_client": false, 00:20:45.916 "zerocopy_threshold": 0, 00:20:45.916 "tls_version": 0, 00:20:45.916 "enable_ktls": false 00:20:45.916 } 00:20:45.916 } 00:20:45.916 ] 00:20:45.916 }, 00:20:45.916 { 00:20:45.916 "subsystem": "vmd", 00:20:45.916 "config": [] 00:20:45.916 }, 00:20:45.916 { 00:20:45.916 "subsystem": "accel", 00:20:45.916 "config": [ 00:20:45.916 { 00:20:45.916 "method": "accel_set_options", 00:20:45.916 "params": { 00:20:45.916 "small_cache_size": 128, 00:20:45.916 "large_cache_size": 16, 00:20:45.916 "task_count": 2048, 00:20:45.916 "sequence_count": 2048, 00:20:45.916 "buf_count": 2048 00:20:45.916 } 00:20:45.916 } 00:20:45.916 ] 00:20:45.916 }, 00:20:45.916 { 00:20:45.916 "subsystem": "bdev", 00:20:45.916 "config": [ 00:20:45.916 { 00:20:45.916 "method": "bdev_set_options", 00:20:45.916 "params": { 00:20:45.916 "bdev_io_pool_size": 65535, 00:20:45.916 "bdev_io_cache_size": 256, 00:20:45.916 "bdev_auto_examine": true, 00:20:45.916 "iobuf_small_cache_size": 128, 00:20:45.916 "iobuf_large_cache_size": 16 00:20:45.916 } 00:20:45.916 }, 00:20:45.916 { 00:20:45.916 "method": "bdev_raid_set_options", 00:20:45.916 "params": { 00:20:45.916 "process_window_size_kb": 1024, 00:20:45.916 "process_max_bandwidth_mb_sec": 0 00:20:45.916 } 00:20:45.916 }, 00:20:45.916 { 00:20:45.916 "method": "bdev_iscsi_set_options", 00:20:45.916 "params": { 00:20:45.916 "timeout_sec": 30 00:20:45.916 } 00:20:45.916 }, 00:20:45.916 { 00:20:45.916 "method": "bdev_nvme_set_options", 00:20:45.916 "params": { 00:20:45.916 "action_on_timeout": "none", 00:20:45.916 "timeout_us": 0, 00:20:45.916 "timeout_admin_us": 0, 00:20:45.916 "keep_alive_timeout_ms": 10000, 00:20:45.916 "arbitration_burst": 0, 00:20:45.916 "low_priority_weight": 0, 00:20:45.916 "medium_priority_weight": 0, 00:20:45.916 "high_priority_weight": 0, 00:20:45.916 "nvme_adminq_poll_period_us": 10000, 00:20:45.916 "nvme_ioq_poll_period_us": 0, 00:20:45.916 "io_queue_requests": 0, 00:20:45.916 "delay_cmd_submit": true, 00:20:45.916 "transport_retry_count": 4, 00:20:45.916 "bdev_retry_count": 3, 00:20:45.916 "transport_ack_timeout": 0, 00:20:45.916 "ctrlr_loss_timeout_sec": 0, 00:20:45.916 "reconnect_delay_sec": 0, 00:20:45.916 "fast_io_fail_timeout_sec": 0, 00:20:45.916 "disable_auto_failback": false, 00:20:45.916 "generate_uuids": false, 00:20:45.916 "transport_tos": 0, 00:20:45.916 "nvme_error_stat": false, 00:20:45.916 "rdma_srq_size": 0, 00:20:45.916 "io_path_stat": false, 00:20:45.916 "allow_accel_sequence": false, 00:20:45.916 "rdma_max_cq_size": 0, 00:20:45.916 "rdma_cm_event_timeout_ms": 0, 00:20:45.916 
"dhchap_digests": [ 00:20:45.916 "sha256", 00:20:45.916 "sha384", 00:20:45.916 "sha512" 00:20:45.916 ], 00:20:45.916 "dhchap_dhgroups": [ 00:20:45.916 "null", 00:20:45.916 "ffdhe2048", 00:20:45.916 "ffdhe3072", 00:20:45.916 "ffdhe4096", 00:20:45.916 "ffdhe6144", 00:20:45.916 "ffdhe8192" 00:20:45.916 ] 00:20:45.916 } 00:20:45.916 }, 00:20:45.916 { 00:20:45.916 "method": "bdev_nvme_set_hotplug", 00:20:45.916 "params": { 00:20:45.916 "period_us": 100000, 00:20:45.916 "enable": false 00:20:45.916 } 00:20:45.916 }, 00:20:45.916 { 00:20:45.916 "method": "bdev_malloc_create", 00:20:45.916 "params": { 00:20:45.916 "name": "malloc0", 00:20:45.916 "num_blocks": 8192, 00:20:45.916 "block_size": 4096, 00:20:45.916 "physical_block_size": 4096, 00:20:45.916 "uuid": "1fbff5a7-4952-49ee-8665-a6b9d1523ac8", 00:20:45.916 "optimal_io_boundary": 0, 00:20:45.916 "md_size": 0, 00:20:45.916 "dif_type": 0, 00:20:45.916 "dif_is_head_of_md": false, 00:20:45.916 "dif_pi_format": 0 00:20:45.916 } 00:20:45.916 }, 00:20:45.916 { 00:20:45.916 "method": "bdev_wait_for_examine" 00:20:45.916 } 00:20:45.916 ] 00:20:45.916 }, 00:20:45.916 { 00:20:45.916 "subsystem": "nbd", 00:20:45.916 "config": [] 00:20:45.916 }, 00:20:45.916 { 00:20:45.916 "subsystem": "scheduler", 00:20:45.916 "config": [ 00:20:45.916 { 00:20:45.916 "method": "framework_set_scheduler", 00:20:45.916 "params": { 00:20:45.916 "name": "static" 00:20:45.916 } 00:20:45.916 } 00:20:45.916 ] 00:20:45.916 }, 00:20:45.916 { 00:20:45.916 "subsystem": "nvmf", 00:20:45.916 "config": [ 00:20:45.916 { 00:20:45.916 "method": "nvmf_set_config", 00:20:45.916 "params": { 00:20:45.916 "discovery_filter": "match_any", 00:20:45.916 "admin_cmd_passthru": { 00:20:45.916 "identify_ctrlr": false 00:20:45.916 }, 00:20:45.916 "dhchap_digests": [ 00:20:45.916 "sha256", 00:20:45.916 "sha384", 00:20:45.916 "sha512" 00:20:45.916 ], 00:20:45.916 "dhchap_dhgroups": [ 00:20:45.916 "null", 00:20:45.917 "ffdhe2048", 00:20:45.917 "ffdhe3072", 00:20:45.917 "ffdhe4096", 00:20:45.917 "ffdhe6144", 00:20:45.917 "ffdhe8192" 00:20:45.917 ] 00:20:45.917 } 00:20:45.917 }, 00:20:45.917 { 00:20:45.917 "method": "nvmf_set_max_subsystems", 00:20:45.917 "params": { 00:20:45.917 "max_subsystems": 1024 00:20:45.917 } 00:20:45.917 }, 00:20:45.917 { 00:20:45.917 "method": "nvmf_set_crdt", 00:20:45.917 "params": { 00:20:45.917 "crdt1": 0, 00:20:45.917 "crdt2": 0, 00:20:45.917 "crdt3": 0 00:20:45.917 } 00:20:45.917 }, 00:20:45.917 { 00:20:45.917 "method": "nvmf_create_transport", 00:20:45.917 "params": { 00:20:45.917 "trtype": "TCP", 00:20:45.917 "max_queue_depth": 128, 00:20:45.917 "max_io_qpairs_per_ctrlr": 127, 00:20:45.917 "in_capsule_data_size": 4096, 00:20:45.917 "max_io_size": 131072, 00:20:45.917 "io_unit_size": 131072, 00:20:45.917 "max_aq_depth": 128, 00:20:45.917 "num_shared_buffers": 511, 00:20:45.917 "buf_cache_size": 4294967295, 00:20:45.917 "dif_insert_or_strip": false, 00:20:45.917 "zcopy": false, 00:20:45.917 "c2h_success": false, 00:20:45.917 "sock_priority": 0, 00:20:45.917 "abort_timeout_sec": 1, 00:20:45.917 "ack_timeout": 0, 00:20:45.917 "data_wr_pool_size": 0 00:20:45.917 } 00:20:45.917 }, 00:20:45.917 { 00:20:45.917 "method": "nvmf_create_subsystem", 00:20:45.917 "params": { 00:20:45.917 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.917 "allow_any_host": false, 00:20:45.917 "serial_number": "SPDK00000000000001", 00:20:45.917 "model_number": "SPDK bdev Controller", 00:20:45.917 "max_namespaces": 10, 00:20:45.917 "min_cntlid": 1, 00:20:45.917 "max_cntlid": 65519, 00:20:45.917 
"ana_reporting": false 00:20:45.917 } 00:20:45.917 }, 00:20:45.917 { 00:20:45.917 "method": "nvmf_subsystem_add_host", 00:20:45.917 "params": { 00:20:45.917 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.917 "host": "nqn.2016-06.io.spdk:host1", 00:20:45.917 "psk": "key0" 00:20:45.917 } 00:20:45.917 }, 00:20:45.917 { 00:20:45.917 "method": "nvmf_subsystem_add_ns", 00:20:45.917 "params": { 00:20:45.917 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.917 "namespace": { 00:20:45.917 "nsid": 1, 00:20:45.917 "bdev_name": "malloc0", 00:20:45.917 "nguid": "1FBFF5A7495249EE8665A6B9D1523AC8", 00:20:45.917 "uuid": "1fbff5a7-4952-49ee-8665-a6b9d1523ac8", 00:20:45.917 "no_auto_visible": false 00:20:45.917 } 00:20:45.917 } 00:20:45.917 }, 00:20:45.917 { 00:20:45.917 "method": "nvmf_subsystem_add_listener", 00:20:45.917 "params": { 00:20:45.917 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.917 "listen_address": { 00:20:45.917 "trtype": "TCP", 00:20:45.917 "adrfam": "IPv4", 00:20:45.917 "traddr": "10.0.0.2", 00:20:45.917 "trsvcid": "4420" 00:20:45.917 }, 00:20:45.917 "secure_channel": true 00:20:45.917 } 00:20:45.917 } 00:20:45.917 ] 00:20:45.917 } 00:20:45.917 ] 00:20:45.917 }' 00:20:45.917 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:46.178 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:46.178 "subsystems": [ 00:20:46.178 { 00:20:46.178 "subsystem": "keyring", 00:20:46.178 "config": [ 00:20:46.178 { 00:20:46.178 "method": "keyring_file_add_key", 00:20:46.178 "params": { 00:20:46.178 "name": "key0", 00:20:46.178 "path": "/tmp/tmp.qEkjAqMy6V" 00:20:46.178 } 00:20:46.178 } 00:20:46.178 ] 00:20:46.178 }, 00:20:46.178 { 00:20:46.178 "subsystem": "iobuf", 00:20:46.178 "config": [ 00:20:46.178 { 00:20:46.178 "method": "iobuf_set_options", 00:20:46.178 "params": { 00:20:46.178 "small_pool_count": 8192, 00:20:46.178 "large_pool_count": 1024, 00:20:46.178 "small_bufsize": 8192, 00:20:46.178 "large_bufsize": 135168 00:20:46.178 } 00:20:46.178 } 00:20:46.178 ] 00:20:46.178 }, 00:20:46.178 { 00:20:46.178 "subsystem": "sock", 00:20:46.178 "config": [ 00:20:46.178 { 00:20:46.178 "method": "sock_set_default_impl", 00:20:46.178 "params": { 00:20:46.178 "impl_name": "posix" 00:20:46.178 } 00:20:46.178 }, 00:20:46.178 { 00:20:46.178 "method": "sock_impl_set_options", 00:20:46.178 "params": { 00:20:46.178 "impl_name": "ssl", 00:20:46.178 "recv_buf_size": 4096, 00:20:46.178 "send_buf_size": 4096, 00:20:46.178 "enable_recv_pipe": true, 00:20:46.178 "enable_quickack": false, 00:20:46.178 "enable_placement_id": 0, 00:20:46.178 "enable_zerocopy_send_server": true, 00:20:46.178 "enable_zerocopy_send_client": false, 00:20:46.178 "zerocopy_threshold": 0, 00:20:46.178 "tls_version": 0, 00:20:46.178 "enable_ktls": false 00:20:46.178 } 00:20:46.178 }, 00:20:46.178 { 00:20:46.178 "method": "sock_impl_set_options", 00:20:46.178 "params": { 00:20:46.178 "impl_name": "posix", 00:20:46.178 "recv_buf_size": 2097152, 00:20:46.178 "send_buf_size": 2097152, 00:20:46.178 "enable_recv_pipe": true, 00:20:46.178 "enable_quickack": false, 00:20:46.178 "enable_placement_id": 0, 00:20:46.178 "enable_zerocopy_send_server": true, 00:20:46.178 "enable_zerocopy_send_client": false, 00:20:46.178 "zerocopy_threshold": 0, 00:20:46.178 "tls_version": 0, 00:20:46.178 "enable_ktls": false 00:20:46.178 } 00:20:46.178 } 00:20:46.178 ] 00:20:46.178 }, 00:20:46.178 { 00:20:46.178 
"subsystem": "vmd", 00:20:46.178 "config": [] 00:20:46.178 }, 00:20:46.178 { 00:20:46.178 "subsystem": "accel", 00:20:46.178 "config": [ 00:20:46.178 { 00:20:46.178 "method": "accel_set_options", 00:20:46.178 "params": { 00:20:46.178 "small_cache_size": 128, 00:20:46.178 "large_cache_size": 16, 00:20:46.178 "task_count": 2048, 00:20:46.178 "sequence_count": 2048, 00:20:46.178 "buf_count": 2048 00:20:46.178 } 00:20:46.178 } 00:20:46.178 ] 00:20:46.178 }, 00:20:46.178 { 00:20:46.178 "subsystem": "bdev", 00:20:46.178 "config": [ 00:20:46.178 { 00:20:46.178 "method": "bdev_set_options", 00:20:46.178 "params": { 00:20:46.178 "bdev_io_pool_size": 65535, 00:20:46.178 "bdev_io_cache_size": 256, 00:20:46.178 "bdev_auto_examine": true, 00:20:46.178 "iobuf_small_cache_size": 128, 00:20:46.178 "iobuf_large_cache_size": 16 00:20:46.178 } 00:20:46.178 }, 00:20:46.178 { 00:20:46.178 "method": "bdev_raid_set_options", 00:20:46.178 "params": { 00:20:46.178 "process_window_size_kb": 1024, 00:20:46.178 "process_max_bandwidth_mb_sec": 0 00:20:46.178 } 00:20:46.178 }, 00:20:46.178 { 00:20:46.178 "method": "bdev_iscsi_set_options", 00:20:46.178 "params": { 00:20:46.178 "timeout_sec": 30 00:20:46.178 } 00:20:46.178 }, 00:20:46.178 { 00:20:46.178 "method": "bdev_nvme_set_options", 00:20:46.178 "params": { 00:20:46.178 "action_on_timeout": "none", 00:20:46.178 "timeout_us": 0, 00:20:46.178 "timeout_admin_us": 0, 00:20:46.178 "keep_alive_timeout_ms": 10000, 00:20:46.178 "arbitration_burst": 0, 00:20:46.178 "low_priority_weight": 0, 00:20:46.178 "medium_priority_weight": 0, 00:20:46.178 "high_priority_weight": 0, 00:20:46.178 "nvme_adminq_poll_period_us": 10000, 00:20:46.178 "nvme_ioq_poll_period_us": 0, 00:20:46.178 "io_queue_requests": 512, 00:20:46.178 "delay_cmd_submit": true, 00:20:46.178 "transport_retry_count": 4, 00:20:46.178 "bdev_retry_count": 3, 00:20:46.178 "transport_ack_timeout": 0, 00:20:46.178 "ctrlr_loss_timeout_sec": 0, 00:20:46.178 "reconnect_delay_sec": 0, 00:20:46.178 "fast_io_fail_timeout_sec": 0, 00:20:46.178 "disable_auto_failback": false, 00:20:46.178 "generate_uuids": false, 00:20:46.178 "transport_tos": 0, 00:20:46.178 "nvme_error_stat": false, 00:20:46.178 "rdma_srq_size": 0, 00:20:46.178 "io_path_stat": false, 00:20:46.178 "allow_accel_sequence": false, 00:20:46.178 "rdma_max_cq_size": 0, 00:20:46.178 "rdma_cm_event_timeout_ms": 0, 00:20:46.178 "dhchap_digests": [ 00:20:46.178 "sha256", 00:20:46.178 "sha384", 00:20:46.178 "sha512" 00:20:46.178 ], 00:20:46.178 "dhchap_dhgroups": [ 00:20:46.178 "null", 00:20:46.178 "ffdhe2048", 00:20:46.178 "ffdhe3072", 00:20:46.178 "ffdhe4096", 00:20:46.178 "ffdhe6144", 00:20:46.178 "ffdhe8192" 00:20:46.179 ] 00:20:46.179 } 00:20:46.179 }, 00:20:46.179 { 00:20:46.179 "method": "bdev_nvme_attach_controller", 00:20:46.179 "params": { 00:20:46.179 "name": "TLSTEST", 00:20:46.179 "trtype": "TCP", 00:20:46.179 "adrfam": "IPv4", 00:20:46.179 "traddr": "10.0.0.2", 00:20:46.179 "trsvcid": "4420", 00:20:46.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.179 "prchk_reftag": false, 00:20:46.179 "prchk_guard": false, 00:20:46.179 "ctrlr_loss_timeout_sec": 0, 00:20:46.179 "reconnect_delay_sec": 0, 00:20:46.179 "fast_io_fail_timeout_sec": 0, 00:20:46.179 "psk": "key0", 00:20:46.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.179 "hdgst": false, 00:20:46.179 "ddgst": false 00:20:46.179 } 00:20:46.179 }, 00:20:46.179 { 00:20:46.179 "method": "bdev_nvme_set_hotplug", 00:20:46.179 "params": { 00:20:46.179 "period_us": 100000, 00:20:46.179 "enable": false 
00:20:46.179 } 00:20:46.179 }, 00:20:46.179 { 00:20:46.179 "method": "bdev_wait_for_examine" 00:20:46.179 } 00:20:46.179 ] 00:20:46.179 }, 00:20:46.179 { 00:20:46.179 "subsystem": "nbd", 00:20:46.179 "config": [] 00:20:46.179 } 00:20:46.179 ] 00:20:46.179 }' 00:20:46.179 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2715042 00:20:46.179 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2715042 ']' 00:20:46.179 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2715042 00:20:46.179 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:46.179 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:46.179 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2715042 00:20:46.179 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:46.179 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:46.179 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2715042' 00:20:46.179 killing process with pid 2715042 00:20:46.179 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2715042 00:20:46.179 Received shutdown signal, test time was about 10.000000 seconds 00:20:46.179 00:20:46.179 Latency(us) 00:20:46.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.179 =================================================================================================================== 00:20:46.179 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:46.179 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2715042 00:20:46.439 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2714759 00:20:46.439 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2714759 ']' 00:20:46.439 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2714759 00:20:46.439 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:46.439 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:46.439 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2714759 00:20:46.439 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:46.439 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:46.439 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2714759' 00:20:46.439 killing process with pid 2714759 00:20:46.439 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2714759 00:20:46.439 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2714759 00:20:46.439 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:46.439 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:46.439 16:45:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:46.439 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.439 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:46.439 "subsystems": [ 00:20:46.439 { 00:20:46.439 "subsystem": "keyring", 00:20:46.439 "config": [ 00:20:46.439 { 00:20:46.439 "method": "keyring_file_add_key", 00:20:46.439 "params": { 00:20:46.439 "name": "key0", 00:20:46.439 "path": "/tmp/tmp.qEkjAqMy6V" 00:20:46.439 } 00:20:46.439 } 00:20:46.439 ] 00:20:46.439 }, 00:20:46.439 { 00:20:46.439 "subsystem": "iobuf", 00:20:46.439 "config": [ 00:20:46.439 { 00:20:46.439 "method": "iobuf_set_options", 00:20:46.439 "params": { 00:20:46.439 "small_pool_count": 8192, 00:20:46.439 "large_pool_count": 1024, 00:20:46.439 "small_bufsize": 8192, 00:20:46.439 "large_bufsize": 135168 00:20:46.439 } 00:20:46.439 } 00:20:46.439 ] 00:20:46.439 }, 00:20:46.439 { 00:20:46.439 "subsystem": "sock", 00:20:46.439 "config": [ 00:20:46.439 { 00:20:46.439 "method": "sock_set_default_impl", 00:20:46.439 "params": { 00:20:46.439 "impl_name": "posix" 00:20:46.439 } 00:20:46.439 }, 00:20:46.439 { 00:20:46.439 "method": "sock_impl_set_options", 00:20:46.439 "params": { 00:20:46.439 "impl_name": "ssl", 00:20:46.439 "recv_buf_size": 4096, 00:20:46.439 "send_buf_size": 4096, 00:20:46.439 "enable_recv_pipe": true, 00:20:46.439 "enable_quickack": false, 00:20:46.439 "enable_placement_id": 0, 00:20:46.439 "enable_zerocopy_send_server": true, 00:20:46.439 "enable_zerocopy_send_client": false, 00:20:46.439 "zerocopy_threshold": 0, 00:20:46.439 "tls_version": 0, 00:20:46.439 "enable_ktls": false 00:20:46.439 } 00:20:46.439 }, 00:20:46.439 { 00:20:46.439 "method": "sock_impl_set_options", 00:20:46.439 "params": { 00:20:46.439 "impl_name": "posix", 00:20:46.439 "recv_buf_size": 2097152, 00:20:46.439 "send_buf_size": 2097152, 00:20:46.439 "enable_recv_pipe": true, 00:20:46.439 "enable_quickack": false, 00:20:46.439 "enable_placement_id": 0, 00:20:46.439 "enable_zerocopy_send_server": true, 00:20:46.439 "enable_zerocopy_send_client": false, 00:20:46.439 "zerocopy_threshold": 0, 00:20:46.439 "tls_version": 0, 00:20:46.439 "enable_ktls": false 00:20:46.439 } 00:20:46.439 } 00:20:46.439 ] 00:20:46.439 }, 00:20:46.439 { 00:20:46.439 "subsystem": "vmd", 00:20:46.439 "config": [] 00:20:46.439 }, 00:20:46.439 { 00:20:46.439 "subsystem": "accel", 00:20:46.439 "config": [ 00:20:46.439 { 00:20:46.439 "method": "accel_set_options", 00:20:46.439 "params": { 00:20:46.439 "small_cache_size": 128, 00:20:46.439 "large_cache_size": 16, 00:20:46.439 "task_count": 2048, 00:20:46.439 "sequence_count": 2048, 00:20:46.439 "buf_count": 2048 00:20:46.439 } 00:20:46.439 } 00:20:46.439 ] 00:20:46.439 }, 00:20:46.439 { 00:20:46.439 "subsystem": "bdev", 00:20:46.439 "config": [ 00:20:46.439 { 00:20:46.440 "method": "bdev_set_options", 00:20:46.440 "params": { 00:20:46.440 "bdev_io_pool_size": 65535, 00:20:46.440 "bdev_io_cache_size": 256, 00:20:46.440 "bdev_auto_examine": true, 00:20:46.440 "iobuf_small_cache_size": 128, 00:20:46.440 "iobuf_large_cache_size": 16 00:20:46.440 } 00:20:46.440 }, 00:20:46.440 { 00:20:46.440 "method": "bdev_raid_set_options", 00:20:46.440 "params": { 00:20:46.440 "process_window_size_kb": 1024, 00:20:46.440 "process_max_bandwidth_mb_sec": 0 00:20:46.440 } 00:20:46.440 }, 00:20:46.440 { 00:20:46.440 "method": "bdev_iscsi_set_options", 00:20:46.440 "params": { 00:20:46.440 "timeout_sec": 30 
00:20:46.440 } 00:20:46.440 }, 00:20:46.440 { 00:20:46.440 "method": "bdev_nvme_set_options", 00:20:46.440 "params": { 00:20:46.440 "action_on_timeout": "none", 00:20:46.440 "timeout_us": 0, 00:20:46.440 "timeout_admin_us": 0, 00:20:46.440 "keep_alive_timeout_ms": 10000, 00:20:46.440 "arbitration_burst": 0, 00:20:46.440 "low_priority_weight": 0, 00:20:46.440 "medium_priority_weight": 0, 00:20:46.440 "high_priority_weight": 0, 00:20:46.440 "nvme_adminq_poll_period_us": 10000, 00:20:46.440 "nvme_ioq_poll_period_us": 0, 00:20:46.440 "io_queue_requests": 0, 00:20:46.440 "delay_cmd_submit": true, 00:20:46.440 "transport_retry_count": 4, 00:20:46.440 "bdev_retry_count": 3, 00:20:46.440 "transport_ack_timeout": 0, 00:20:46.440 "ctrlr_loss_timeout_sec": 0, 00:20:46.440 "reconnect_delay_sec": 0, 00:20:46.440 "fast_io_fail_timeout_sec": 0, 00:20:46.440 "disable_auto_failback": false, 00:20:46.440 "generate_uuids": false, 00:20:46.440 "transport_tos": 0, 00:20:46.440 "nvme_error_stat": false, 00:20:46.440 "rdma_srq_size": 0, 00:20:46.440 "io_path_stat": false, 00:20:46.440 "allow_accel_sequence": false, 00:20:46.440 "rdma_max_cq_size": 0, 00:20:46.440 "rdma_cm_event_timeout_ms": 0, 00:20:46.440 "dhchap_digests": [ 00:20:46.440 "sha256", 00:20:46.440 "sha384", 00:20:46.440 "sha512" 00:20:46.440 ], 00:20:46.440 "dhchap_dhgroups": [ 00:20:46.440 "null", 00:20:46.440 "ffdhe2048", 00:20:46.440 "ffdhe3072", 00:20:46.440 "ffdhe4096", 00:20:46.440 "ffdhe6144", 00:20:46.440 "ffdhe8192" 00:20:46.440 ] 00:20:46.440 } 00:20:46.440 }, 00:20:46.440 { 00:20:46.440 "method": "bdev_nvme_set_hotplug", 00:20:46.440 "params": { 00:20:46.440 "period_us": 100000, 00:20:46.440 "enable": false 00:20:46.440 } 00:20:46.440 }, 00:20:46.440 { 00:20:46.440 "method": "bdev_malloc_create", 00:20:46.440 "params": { 00:20:46.440 "name": "malloc0", 00:20:46.440 "num_blocks": 8192, 00:20:46.440 "block_size": 4096, 00:20:46.440 "physical_block_size": 4096, 00:20:46.440 "uuid": "1fbff5a7-4952-49ee-8665-a6b9d1523ac8", 00:20:46.440 "optimal_io_boundary": 0, 00:20:46.440 "md_size": 0, 00:20:46.440 "dif_type": 0, 00:20:46.440 "dif_is_head_of_md": false, 00:20:46.440 "dif_pi_format": 0 00:20:46.440 } 00:20:46.440 }, 00:20:46.440 { 00:20:46.440 "method": "bdev_wait_for_examine" 00:20:46.440 } 00:20:46.440 ] 00:20:46.440 }, 00:20:46.440 { 00:20:46.440 "subsystem": "nbd", 00:20:46.440 "config": [] 00:20:46.440 }, 00:20:46.440 { 00:20:46.440 "subsystem": "scheduler", 00:20:46.440 "config": [ 00:20:46.440 { 00:20:46.440 "method": "framework_set_scheduler", 00:20:46.440 "params": { 00:20:46.440 "name": "static" 00:20:46.440 } 00:20:46.440 } 00:20:46.440 ] 00:20:46.440 }, 00:20:46.440 { 00:20:46.440 "subsystem": "nvmf", 00:20:46.440 "config": [ 00:20:46.440 { 00:20:46.440 "method": "nvmf_set_config", 00:20:46.440 "params": { 00:20:46.440 "discovery_filter": "match_any", 00:20:46.440 "admin_cmd_passthru": { 00:20:46.440 "identify_ctrlr": false 00:20:46.440 }, 00:20:46.440 "dhchap_digests": [ 00:20:46.440 "sha256", 00:20:46.440 "sha384", 00:20:46.440 "sha512" 00:20:46.440 ], 00:20:46.440 "dhchap_dhgroups": [ 00:20:46.440 "null", 00:20:46.440 "ffdhe2048", 00:20:46.440 "ffdhe3072", 00:20:46.440 "ffdhe4096", 00:20:46.440 "ffdhe6144", 00:20:46.440 "ffdhe8192" 00:20:46.440 ] 00:20:46.440 } 00:20:46.440 }, 00:20:46.440 { 00:20:46.440 "method": "nvmf_set_max_subsystems", 00:20:46.440 "params": { 00:20:46.440 "max_subsystems": 1024 00:20:46.440 } 00:20:46.440 }, 00:20:46.440 { 00:20:46.440 "method": "nvmf_set_crdt", 00:20:46.440 "params": { 00:20:46.440 
"crdt1": 0, 00:20:46.440 "crdt2": 0, 00:20:46.440 "crdt3": 0 00:20:46.440 } 00:20:46.440 }, 00:20:46.440 { 00:20:46.440 "method": "nvmf_create_transport", 00:20:46.440 "params": { 00:20:46.440 "trtype": "TCP", 00:20:46.440 "max_queue_depth": 128, 00:20:46.440 "max_io_qpairs_per_ctrlr": 127, 00:20:46.440 "in_capsule_data_size": 4096, 00:20:46.440 "max_io_size": 131072, 00:20:46.440 "io_unit_size": 131072, 00:20:46.440 "max_aq_depth": 128, 00:20:46.440 "num_shared_buffers": 511, 00:20:46.440 "buf_cache_size": 4294967295, 00:20:46.440 "dif_insert_or_strip": false, 00:20:46.440 "zcopy": false, 00:20:46.440 "c2h_success": false, 00:20:46.440 "sock_priority": 0, 00:20:46.440 "abort_timeout_sec": 1, 00:20:46.440 "ack_timeout": 0, 00:20:46.440 "data_wr_pool_size": 0 00:20:46.440 } 00:20:46.440 }, 00:20:46.440 { 00:20:46.440 "method": "nvmf_create_subsystem", 00:20:46.440 "params": { 00:20:46.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.440 "allow_any_host": false, 00:20:46.440 "serial_number": "SPDK00000000000001", 00:20:46.440 "model_number": "SPDK bdev Controller", 00:20:46.440 "max_namespaces": 10, 00:20:46.440 "min_cntlid": 1, 00:20:46.440 "max_cntlid": 65519, 00:20:46.440 "ana_reporting": false 00:20:46.440 } 00:20:46.440 }, 00:20:46.440 { 00:20:46.440 "method": "nvmf_subsystem_add_host", 00:20:46.440 "params": { 00:20:46.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.440 "host": "nqn.2016-06.io.spdk:host1", 00:20:46.440 "psk": "key0" 00:20:46.440 } 00:20:46.440 }, 00:20:46.440 { 00:20:46.440 "method": "nvmf_subsystem_add_ns", 00:20:46.440 "params": { 00:20:46.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.440 "namespace": { 00:20:46.440 "nsid": 1, 00:20:46.440 "bdev_name": "malloc0", 00:20:46.440 "nguid": "1FBFF5A7495249EE8665A6B9D1523AC8", 00:20:46.440 "uuid": "1fbff5a7-4952-49ee-8665-a6b9d1523ac8", 00:20:46.440 "no_auto_visible": false 00:20:46.440 } 00:20:46.440 } 00:20:46.440 }, 00:20:46.440 { 00:20:46.440 "method": "nvmf_subsystem_add_listener", 00:20:46.440 "params": { 00:20:46.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.440 "listen_address": { 00:20:46.440 "trtype": "TCP", 00:20:46.440 "adrfam": "IPv4", 00:20:46.440 "traddr": "10.0.0.2", 00:20:46.440 "trsvcid": "4420" 00:20:46.440 }, 00:20:46.440 "secure_channel": true 00:20:46.440 } 00:20:46.440 } 00:20:46.440 ] 00:20:46.440 } 00:20:46.440 ] 00:20:46.440 }' 00:20:46.440 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2715237 00:20:46.440 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2715237 00:20:46.440 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2715237 ']' 00:20:46.440 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.440 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:46.440 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:46.440 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:46.440 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.440 16:45:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:46.700 [2024-10-01 16:45:38.168481] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:20:46.700 [2024-10-01 16:45:38.168533] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.700 [2024-10-01 16:45:38.224783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.700 [2024-10-01 16:45:38.278879] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.700 [2024-10-01 16:45:38.278912] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.700 [2024-10-01 16:45:38.278918] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.700 [2024-10-01 16:45:38.278925] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.700 [2024-10-01 16:45:38.278929] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.700 [2024-10-01 16:45:38.278984] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.960 [2024-10-01 16:45:38.480560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.960 [2024-10-01 16:45:38.512579] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:46.960 [2024-10-01 16:45:38.512768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.533 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:47.533 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:47.533 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:47.533 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:47.533 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.533 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.534 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2715533 00:20:47.534 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2715533 /var/tmp/bdevperf.sock 00:20:47.534 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2715533 ']' 00:20:47.534 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.534 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:47.534 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:47.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.534 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:47.534 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:47.534 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.534 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:47.534 "subsystems": [ 00:20:47.534 { 00:20:47.534 "subsystem": "keyring", 00:20:47.534 "config": [ 00:20:47.534 { 00:20:47.534 "method": "keyring_file_add_key", 00:20:47.534 "params": { 00:20:47.534 "name": "key0", 00:20:47.534 "path": "/tmp/tmp.qEkjAqMy6V" 00:20:47.534 } 00:20:47.534 } 00:20:47.534 ] 00:20:47.534 }, 00:20:47.534 { 00:20:47.534 "subsystem": "iobuf", 00:20:47.534 "config": [ 00:20:47.534 { 00:20:47.534 "method": "iobuf_set_options", 00:20:47.534 "params": { 00:20:47.534 "small_pool_count": 8192, 00:20:47.534 "large_pool_count": 1024, 00:20:47.534 "small_bufsize": 8192, 00:20:47.534 "large_bufsize": 135168 00:20:47.534 } 00:20:47.534 } 00:20:47.534 ] 00:20:47.534 }, 00:20:47.534 { 00:20:47.534 "subsystem": "sock", 00:20:47.534 "config": [ 00:20:47.534 { 00:20:47.534 "method": "sock_set_default_impl", 00:20:47.534 "params": { 00:20:47.534 "impl_name": "posix" 00:20:47.534 } 00:20:47.534 }, 00:20:47.534 { 00:20:47.534 "method": "sock_impl_set_options", 00:20:47.534 "params": { 00:20:47.534 "impl_name": "ssl", 00:20:47.534 "recv_buf_size": 4096, 00:20:47.534 "send_buf_size": 4096, 00:20:47.534 "enable_recv_pipe": true, 00:20:47.534 "enable_quickack": false, 00:20:47.534 "enable_placement_id": 0, 00:20:47.534 "enable_zerocopy_send_server": true, 00:20:47.534 "enable_zerocopy_send_client": false, 00:20:47.534 "zerocopy_threshold": 0, 00:20:47.534 "tls_version": 0, 00:20:47.534 "enable_ktls": false 00:20:47.534 } 00:20:47.534 }, 00:20:47.534 { 00:20:47.534 "method": "sock_impl_set_options", 00:20:47.534 "params": { 00:20:47.534 "impl_name": "posix", 00:20:47.534 "recv_buf_size": 2097152, 00:20:47.534 "send_buf_size": 2097152, 00:20:47.534 "enable_recv_pipe": true, 00:20:47.534 "enable_quickack": false, 00:20:47.534 "enable_placement_id": 0, 00:20:47.534 "enable_zerocopy_send_server": true, 00:20:47.534 "enable_zerocopy_send_client": false, 00:20:47.534 "zerocopy_threshold": 0, 00:20:47.534 "tls_version": 0, 00:20:47.534 "enable_ktls": false 00:20:47.534 } 00:20:47.534 } 00:20:47.534 ] 00:20:47.534 }, 00:20:47.534 { 00:20:47.534 "subsystem": "vmd", 00:20:47.534 "config": [] 00:20:47.534 }, 00:20:47.534 { 00:20:47.534 "subsystem": "accel", 00:20:47.534 "config": [ 00:20:47.534 { 00:20:47.534 "method": "accel_set_options", 00:20:47.534 "params": { 00:20:47.534 "small_cache_size": 128, 00:20:47.534 "large_cache_size": 16, 00:20:47.534 "task_count": 2048, 00:20:47.534 "sequence_count": 2048, 00:20:47.534 "buf_count": 2048 00:20:47.534 } 00:20:47.534 } 00:20:47.534 ] 00:20:47.534 }, 00:20:47.534 { 00:20:47.534 "subsystem": "bdev", 00:20:47.534 "config": [ 00:20:47.534 { 00:20:47.534 "method": "bdev_set_options", 00:20:47.534 "params": { 00:20:47.534 "bdev_io_pool_size": 65535, 00:20:47.534 "bdev_io_cache_size": 256, 00:20:47.534 "bdev_auto_examine": true, 00:20:47.534 "iobuf_small_cache_size": 128, 00:20:47.534 
"iobuf_large_cache_size": 16 00:20:47.534 } 00:20:47.534 }, 00:20:47.534 { 00:20:47.534 "method": "bdev_raid_set_options", 00:20:47.534 "params": { 00:20:47.534 "process_window_size_kb": 1024, 00:20:47.534 "process_max_bandwidth_mb_sec": 0 00:20:47.534 } 00:20:47.534 }, 00:20:47.534 { 00:20:47.534 "method": "bdev_iscsi_set_options", 00:20:47.534 "params": { 00:20:47.534 "timeout_sec": 30 00:20:47.534 } 00:20:47.534 }, 00:20:47.534 { 00:20:47.534 "method": "bdev_nvme_set_options", 00:20:47.534 "params": { 00:20:47.534 "action_on_timeout": "none", 00:20:47.534 "timeout_us": 0, 00:20:47.534 "timeout_admin_us": 0, 00:20:47.534 "keep_alive_timeout_ms": 10000, 00:20:47.534 "arbitration_burst": 0, 00:20:47.534 "low_priority_weight": 0, 00:20:47.534 "medium_priority_weight": 0, 00:20:47.534 "high_priority_weight": 0, 00:20:47.534 "nvme_adminq_poll_period_us": 10000, 00:20:47.534 "nvme_ioq_poll_period_us": 0, 00:20:47.534 "io_queue_requests": 512, 00:20:47.534 "delay_cmd_submit": true, 00:20:47.534 "transport_retry_count": 4, 00:20:47.534 "bdev_retry_count": 3, 00:20:47.534 "transport_ack_timeout": 0, 00:20:47.535 "ctrlr_loss_timeout_sec": 0, 00:20:47.535 "reconnect_delay_sec": 0, 00:20:47.535 "fast_io_fail_timeout_sec": 0, 00:20:47.535 "disable_auto_failback": false, 00:20:47.535 "generate_uuids": false, 00:20:47.535 "transport_tos": 0, 00:20:47.535 "nvme_error_stat": false, 00:20:47.535 "rdma_srq_size": 0, 00:20:47.535 "io_path_stat": false, 00:20:47.535 "allow_accel_sequence": false, 00:20:47.535 "rdma_max_cq_size": 0, 00:20:47.535 "rdma_cm_event_timeout_ms": 0, 00:20:47.535 "dhchap_digests": [ 00:20:47.535 "sha256", 00:20:47.535 "sha384", 00:20:47.535 "sha512" 00:20:47.535 ], 00:20:47.535 "dhchap_dhgroups": [ 00:20:47.535 "null", 00:20:47.535 "ffdhe2048", 00:20:47.535 "ffdhe3072", 00:20:47.535 "ffdhe4096", 00:20:47.535 "ffdhe6144", 00:20:47.535 "ffdhe8192" 00:20:47.535 ] 00:20:47.535 } 00:20:47.535 }, 00:20:47.535 { 00:20:47.535 "method": "bdev_nvme_attach_controller", 00:20:47.535 "params": { 00:20:47.535 "name": "TLSTEST", 00:20:47.535 "trtype": "TCP", 00:20:47.535 "adrfam": "IPv4", 00:20:47.535 "traddr": "10.0.0.2", 00:20:47.535 "trsvcid": "4420", 00:20:47.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.535 "prchk_reftag": false, 00:20:47.535 "prchk_guard": false, 00:20:47.535 "ctrlr_loss_timeout_sec": 0, 00:20:47.535 "reconnect_delay_sec": 0, 00:20:47.535 "fast_io_fail_timeout_sec": 0, 00:20:47.535 "psk": "key0", 00:20:47.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.535 "hdgst": false, 00:20:47.535 "ddgst": false 00:20:47.535 } 00:20:47.535 }, 00:20:47.535 { 00:20:47.535 "method": "bdev_nvme_set_hotplug", 00:20:47.535 "params": { 00:20:47.535 "period_us": 100000, 00:20:47.535 "enable": false 00:20:47.535 } 00:20:47.535 }, 00:20:47.535 { 00:20:47.535 "method": "bdev_wait_for_examine" 00:20:47.535 } 00:20:47.535 ] 00:20:47.535 }, 00:20:47.535 { 00:20:47.535 "subsystem": "nbd", 00:20:47.535 "config": [] 00:20:47.535 } 00:20:47.535 ] 00:20:47.535 }' 00:20:47.535 [2024-10-01 16:45:39.125114] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:20:47.535 [2024-10-01 16:45:39.125165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2715533 ] 00:20:47.535 [2024-10-01 16:45:39.175756] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.800 [2024-10-01 16:45:39.228576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.800 [2024-10-01 16:45:39.361983] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.369 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:48.369 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:48.369 16:45:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:48.629 Running I/O for 10 seconds... 00:20:58.535 3747.00 IOPS, 14.64 MiB/s 3367.00 IOPS, 13.15 MiB/s 3751.33 IOPS, 14.65 MiB/s 3318.00 IOPS, 12.96 MiB/s 3601.40 IOPS, 14.07 MiB/s 3818.33 IOPS, 14.92 MiB/s 3654.57 IOPS, 14.28 MiB/s 3751.50 IOPS, 14.65 MiB/s 3889.11 IOPS, 15.19 MiB/s 4012.50 IOPS, 15.67 MiB/s 00:20:58.535 Latency(us) 00:20:58.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.535 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:58.535 Verification LBA range: start 0x0 length 0x2000 00:20:58.535 TLSTESTn1 : 10.01 4020.48 15.71 0.00 0.00 31800.84 4839.58 78643.20 00:20:58.535 =================================================================================================================== 00:20:58.535 Total : 4020.48 15.71 0.00 0.00 31800.84 4839.58 78643.20 00:20:58.535 { 00:20:58.535 "results": [ 00:20:58.535 { 00:20:58.535 "job": "TLSTESTn1", 00:20:58.535 "core_mask": "0x4", 00:20:58.535 "workload": "verify", 00:20:58.535 "status": "finished", 00:20:58.535 "verify_range": { 00:20:58.535 "start": 0, 00:20:58.535 "length": 8192 00:20:58.535 }, 00:20:58.535 "queue_depth": 128, 00:20:58.535 "io_size": 4096, 00:20:58.535 "runtime": 10.011728, 00:20:58.535 "iops": 4020.484775455346, 00:20:58.535 "mibps": 15.705018654122446, 00:20:58.535 "io_failed": 0, 00:20:58.535 "io_timeout": 0, 00:20:58.535 "avg_latency_us": 31800.83595685642, 00:20:58.535 "min_latency_us": 4839.581538461539, 00:20:58.535 "max_latency_us": 78643.2 00:20:58.535 } 00:20:58.535 ], 00:20:58.535 "core_count": 1 00:20:58.535 } 00:20:58.535 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:58.535 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2715533 00:20:58.535 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2715533 ']' 00:20:58.535 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2715533 00:20:58.535 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:58.535 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:58.535 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2715533 00:20:58.535 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:58.535 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:58.535 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2715533' 00:20:58.535 killing process with pid 2715533 00:20:58.535 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2715533 00:20:58.535 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.535 00:20:58.535 Latency(us) 00:20:58.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.535 =================================================================================================================== 00:20:58.535 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:58.535 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2715533 00:20:58.795 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2715237 00:20:58.795 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2715237 ']' 00:20:58.795 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2715237 00:20:58.795 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:58.795 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:58.796 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2715237 00:20:58.796 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:58.796 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:58.796 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2715237' 00:20:58.796 killing process with pid 2715237 00:20:58.796 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2715237 00:20:58.796 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2715237 00:20:58.796 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:58.796 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:58.796 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:58.796 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.055 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2717370 00:20:59.056 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2717370 00:20:59.056 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2717370 ']' 00:20:59.056 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.056 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:59.056 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:59.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.056 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:59.056 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.056 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:59.056 [2024-10-01 16:45:50.511449] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:20:59.056 [2024-10-01 16:45:50.511491] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.056 [2024-10-01 16:45:50.586549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.056 [2024-10-01 16:45:50.654004] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.056 [2024-10-01 16:45:50.654051] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.056 [2024-10-01 16:45:50.654060] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.056 [2024-10-01 16:45:50.654066] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.056 [2024-10-01 16:45:50.654072] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:59.056 [2024-10-01 16:45:50.654094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.316 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.316 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:59.316 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:59.316 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:59.316 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.316 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.316 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.qEkjAqMy6V 00:20:59.316 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qEkjAqMy6V 00:20:59.316 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:59.316 [2024-10-01 16:45:50.998370] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.577 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:59.577 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:59.838 [2024-10-01 16:45:51.439474] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:20:59.838 [2024-10-01 16:45:51.439790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.838 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:00.098 malloc0 00:21:00.098 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:00.358 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qEkjAqMy6V 00:21:00.619 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:00.879 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2717704 00:21:00.879 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:00.879 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:00.879 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2717704 /var/tmp/bdevperf.sock 00:21:00.879 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2717704 ']' 00:21:00.879 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.879 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:00.879 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.879 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:00.879 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.879 [2024-10-01 16:45:52.410100] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
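
The setup_nvmf_tgt helper traced above reduces to a short RPC sequence: a TCP transport, a subsystem backed by a malloc namespace, a TLS-enabled listener (the -k flag), and a host entry bound to the PSK. All seven calls are lifted from the trace above, with rpc.py abbreviating the full scripts/rpc.py path:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.qEkjAqMy6V
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
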
00:21:00.879 [2024-10-01 16:45:52.410202] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2717704 ] 00:21:00.879 [2024-10-01 16:45:52.471753] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.879 [2024-10-01 16:45:52.537097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.138 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:01.138 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:01.138 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qEkjAqMy6V 00:21:01.138 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:01.397 [2024-10-01 16:45:53.001227] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:01.657 nvme0n1 00:21:01.657 16:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:01.657 Running I/O for 1 seconds... 00:21:02.856 877.00 IOPS, 3.43 MiB/s 00:21:02.856 Latency(us) 00:21:02.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.856 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:02.856 Verification LBA range: start 0x0 length 0x2000 00:21:02.856 nvme0n1 : 1.08 926.74 3.62 0.00 0.00 134095.47 4738.76 196003.05 00:21:02.856 =================================================================================================================== 00:21:02.856 Total : 926.74 3.62 0.00 0.00 134095.47 4738.76 196003.05 00:21:02.856 { 00:21:02.856 "results": [ 00:21:02.856 { 00:21:02.856 "job": "nvme0n1", 00:21:02.856 "core_mask": "0x2", 00:21:02.856 "workload": "verify", 00:21:02.856 "status": "finished", 00:21:02.856 "verify_range": { 00:21:02.856 "start": 0, 00:21:02.856 "length": 8192 00:21:02.856 }, 00:21:02.856 "queue_depth": 128, 00:21:02.856 "io_size": 4096, 00:21:02.856 "runtime": 1.084451, 00:21:02.856 "iops": 926.736201082391, 00:21:02.856 "mibps": 3.6200632854780896, 00:21:02.856 "io_failed": 0, 00:21:02.856 "io_timeout": 0, 00:21:02.856 "avg_latency_us": 134095.47188365864, 00:21:02.856 "min_latency_us": 4738.756923076923, 00:21:02.856 "max_latency_us": 196003.0523076923 00:21:02.856 } 00:21:02.856 ], 00:21:02.856 "core_count": 1 00:21:02.856 } 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2717704 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2717704 ']' 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2717704 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
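
Each run above emits both the human-readable latency table and a JSON block; the headline fields are iops, mibps and avg_latency_us. A quick way to pull them out, assuming the JSON block has been saved to result.json and jq is available (both are assumptions, not part of this test):

    jq '.results[0] | {job, iops, mibps, avg_latency_us}' result.json
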
00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2717704 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2717704' 00:21:02.856 killing process with pid 2717704 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2717704 00:21:02.856 Received shutdown signal, test time was about 1.000000 seconds 00:21:02.856 00:21:02.856 Latency(us) 00:21:02.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.856 =================================================================================================================== 00:21:02.856 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2717704 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2717370 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2717370 ']' 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2717370 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2717370 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2717370' 00:21:02.856 killing process with pid 2717370 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2717370 00:21:02.856 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2717370 00:21:03.117 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:03.117 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:03.117 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:03.117 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.117 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2718041 00:21:03.117 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2718041 00:21:03.117 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:03.117 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2718041 ']' 00:21:03.117 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
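
The killprocess sequences traced above all follow one shape: validate the pid, probe liveness with kill -0, log, then kill and reap. A condensed sketch of the helper (the real one lives in autotest_common.sh and also inspects 'ps --no-headers -o comm=' to handle sudo-wrapped processes, omitted here):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1    # already gone? nothing to do
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                   # reap and propagate the exit status
    }
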
00:21:03.117 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:03.117 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.117 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:03.117 16:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.117 [2024-10-01 16:45:54.737176] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:21:03.117 [2024-10-01 16:45:54.737227] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.378 [2024-10-01 16:45:54.818888] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.378 [2024-10-01 16:45:54.881948] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.378 [2024-10-01 16:45:54.881994] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.378 [2024-10-01 16:45:54.882002] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.378 [2024-10-01 16:45:54.882008] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.378 [2024-10-01 16:45:54.882014] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
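
Those notices come from running nvmf_tgt with -e 0xFFFF, which enables every tracepoint group, and the app itself says how to harvest them. Following its own hint (a sketch; 'nvmf' is the shm name and 0 the instance id, exactly as printed above, and the -f form for saved buffers follows the documented spdk_trace usage):

    spdk_trace -s nvmf -i 0              # live snapshot while the target runs
    cp /dev/shm/nvmf_trace.0 /tmp/       # or keep the buffer for later
    spdk_trace -f /tmp/nvmf_trace.0      # offline analysis of the copied buffer
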
00:21:03.378 [2024-10-01 16:45:54.882040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.949 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:03.949 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:03.949 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:03.949 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:03.949 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.949 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.949 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:03.949 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.949 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.949 [2024-10-01 16:45:55.627208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.210 malloc0 00:21:04.210 [2024-10-01 16:45:55.664510] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:04.210 [2024-10-01 16:45:55.664834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.210 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.210 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2718338 00:21:04.210 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2718338 /var/tmp/bdevperf.sock 00:21:04.210 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2718338 ']' 00:21:04.210 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.210 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:04.210 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.210 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:04.210 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.210 16:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:04.210 [2024-10-01 16:45:55.745073] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
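
Every bdevperf phase in this log reuses the invocation shape just launched above; decoded flag by flag: -m 2 is the core mask (one reactor), -z starts the app idle until bdevperf.py perform_tests fires over the RPC socket, -r names that socket, -q 128 sets the queue depth, -o 4k the I/O size, -w verify a write-then-readback workload, and -t 1 the runtime in seconds. The same command, reformatted for readability:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1
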
00:21:04.210 [2024-10-01 16:45:55.745133] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2718338 ] 00:21:04.210 [2024-10-01 16:45:55.800676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.210 [2024-10-01 16:45:55.866975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.149 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:05.149 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:05.149 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qEkjAqMy6V 00:21:05.149 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:05.409 [2024-10-01 16:45:56.984729] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.409 nvme0n1 00:21:05.669 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:05.669 Running I/O for 1 seconds... 00:21:06.608 950.00 IOPS, 3.71 MiB/s 00:21:06.608 Latency(us) 00:21:06.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.608 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:06.608 Verification LBA range: start 0x0 length 0x2000 00:21:06.608 nvme0n1 : 1.09 992.20 3.88 0.00 0.00 125105.45 6805.66 183904.10 00:21:06.608 =================================================================================================================== 00:21:06.608 Total : 992.20 3.88 0.00 0.00 125105.45 6805.66 183904.10 00:21:06.608 { 00:21:06.608 "results": [ 00:21:06.608 { 00:21:06.608 "job": "nvme0n1", 00:21:06.608 "core_mask": "0x2", 00:21:06.608 "workload": "verify", 00:21:06.608 "status": "finished", 00:21:06.608 "verify_range": { 00:21:06.608 "start": 0, 00:21:06.608 "length": 8192 00:21:06.608 }, 00:21:06.608 "queue_depth": 128, 00:21:06.608 "io_size": 4096, 00:21:06.608 "runtime": 1.087486, 00:21:06.608 "iops": 992.1966811526769, 00:21:06.608 "mibps": 3.875768285752644, 00:21:06.608 "io_failed": 0, 00:21:06.608 "io_timeout": 0, 00:21:06.608 "avg_latency_us": 125105.45342553647, 00:21:06.608 "min_latency_us": 6805.661538461539, 00:21:06.608 "max_latency_us": 183904.09846153847 00:21:06.608 } 00:21:06.608 ], 00:21:06.608 "core_count": 1 00:21:06.608 } 00:21:06.608 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:06.608 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.608 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.869 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.869 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:06.869 "subsystems": [ 
00:21:06.869 { 00:21:06.869 "subsystem": "keyring", 00:21:06.869 "config": [ 00:21:06.869 { 00:21:06.869 "method": "keyring_file_add_key", 00:21:06.869 "params": { 00:21:06.869 "name": "key0", 00:21:06.869 "path": "/tmp/tmp.qEkjAqMy6V" 00:21:06.869 } 00:21:06.869 } 00:21:06.869 ] 00:21:06.869 }, 00:21:06.869 { 00:21:06.869 "subsystem": "iobuf", 00:21:06.869 "config": [ 00:21:06.869 { 00:21:06.869 "method": "iobuf_set_options", 00:21:06.869 "params": { 00:21:06.869 "small_pool_count": 8192, 00:21:06.869 "large_pool_count": 1024, 00:21:06.869 "small_bufsize": 8192, 00:21:06.869 "large_bufsize": 135168 00:21:06.869 } 00:21:06.869 } 00:21:06.869 ] 00:21:06.869 }, 00:21:06.869 { 00:21:06.869 "subsystem": "sock", 00:21:06.869 "config": [ 00:21:06.869 { 00:21:06.869 "method": "sock_set_default_impl", 00:21:06.869 "params": { 00:21:06.869 "impl_name": "posix" 00:21:06.869 } 00:21:06.869 }, 00:21:06.869 { 00:21:06.869 "method": "sock_impl_set_options", 00:21:06.869 "params": { 00:21:06.869 "impl_name": "ssl", 00:21:06.869 "recv_buf_size": 4096, 00:21:06.869 "send_buf_size": 4096, 00:21:06.869 "enable_recv_pipe": true, 00:21:06.869 "enable_quickack": false, 00:21:06.869 "enable_placement_id": 0, 00:21:06.869 "enable_zerocopy_send_server": true, 00:21:06.869 "enable_zerocopy_send_client": false, 00:21:06.869 "zerocopy_threshold": 0, 00:21:06.869 "tls_version": 0, 00:21:06.869 "enable_ktls": false 00:21:06.869 } 00:21:06.869 }, 00:21:06.869 { 00:21:06.869 "method": "sock_impl_set_options", 00:21:06.869 "params": { 00:21:06.869 "impl_name": "posix", 00:21:06.869 "recv_buf_size": 2097152, 00:21:06.869 "send_buf_size": 2097152, 00:21:06.869 "enable_recv_pipe": true, 00:21:06.869 "enable_quickack": false, 00:21:06.869 "enable_placement_id": 0, 00:21:06.869 "enable_zerocopy_send_server": true, 00:21:06.869 "enable_zerocopy_send_client": false, 00:21:06.869 "zerocopy_threshold": 0, 00:21:06.869 "tls_version": 0, 00:21:06.869 "enable_ktls": false 00:21:06.869 } 00:21:06.869 } 00:21:06.869 ] 00:21:06.869 }, 00:21:06.869 { 00:21:06.869 "subsystem": "vmd", 00:21:06.869 "config": [] 00:21:06.869 }, 00:21:06.869 { 00:21:06.869 "subsystem": "accel", 00:21:06.869 "config": [ 00:21:06.869 { 00:21:06.869 "method": "accel_set_options", 00:21:06.869 "params": { 00:21:06.869 "small_cache_size": 128, 00:21:06.869 "large_cache_size": 16, 00:21:06.869 "task_count": 2048, 00:21:06.869 "sequence_count": 2048, 00:21:06.869 "buf_count": 2048 00:21:06.869 } 00:21:06.869 } 00:21:06.869 ] 00:21:06.869 }, 00:21:06.869 { 00:21:06.869 "subsystem": "bdev", 00:21:06.869 "config": [ 00:21:06.869 { 00:21:06.869 "method": "bdev_set_options", 00:21:06.869 "params": { 00:21:06.869 "bdev_io_pool_size": 65535, 00:21:06.869 "bdev_io_cache_size": 256, 00:21:06.869 "bdev_auto_examine": true, 00:21:06.869 "iobuf_small_cache_size": 128, 00:21:06.869 "iobuf_large_cache_size": 16 00:21:06.869 } 00:21:06.869 }, 00:21:06.869 { 00:21:06.869 "method": "bdev_raid_set_options", 00:21:06.869 "params": { 00:21:06.869 "process_window_size_kb": 1024, 00:21:06.869 "process_max_bandwidth_mb_sec": 0 00:21:06.869 } 00:21:06.869 }, 00:21:06.869 { 00:21:06.869 "method": "bdev_iscsi_set_options", 00:21:06.869 "params": { 00:21:06.869 "timeout_sec": 30 00:21:06.869 } 00:21:06.869 }, 00:21:06.869 { 00:21:06.869 "method": "bdev_nvme_set_options", 00:21:06.869 "params": { 00:21:06.869 "action_on_timeout": "none", 00:21:06.869 "timeout_us": 0, 00:21:06.869 "timeout_admin_us": 0, 00:21:06.869 "keep_alive_timeout_ms": 10000, 00:21:06.869 "arbitration_burst": 0, 
00:21:06.869 "low_priority_weight": 0, 00:21:06.869 "medium_priority_weight": 0, 00:21:06.869 "high_priority_weight": 0, 00:21:06.869 "nvme_adminq_poll_period_us": 10000, 00:21:06.869 "nvme_ioq_poll_period_us": 0, 00:21:06.869 "io_queue_requests": 0, 00:21:06.869 "delay_cmd_submit": true, 00:21:06.869 "transport_retry_count": 4, 00:21:06.869 "bdev_retry_count": 3, 00:21:06.869 "transport_ack_timeout": 0, 00:21:06.869 "ctrlr_loss_timeout_sec": 0, 00:21:06.869 "reconnect_delay_sec": 0, 00:21:06.869 "fast_io_fail_timeout_sec": 0, 00:21:06.869 "disable_auto_failback": false, 00:21:06.869 "generate_uuids": false, 00:21:06.869 "transport_tos": 0, 00:21:06.869 "nvme_error_stat": false, 00:21:06.869 "rdma_srq_size": 0, 00:21:06.869 "io_path_stat": false, 00:21:06.869 "allow_accel_sequence": false, 00:21:06.869 "rdma_max_cq_size": 0, 00:21:06.869 "rdma_cm_event_timeout_ms": 0, 00:21:06.869 "dhchap_digests": [ 00:21:06.869 "sha256", 00:21:06.869 "sha384", 00:21:06.869 "sha512" 00:21:06.869 ], 00:21:06.869 "dhchap_dhgroups": [ 00:21:06.869 "null", 00:21:06.869 "ffdhe2048", 00:21:06.869 "ffdhe3072", 00:21:06.869 "ffdhe4096", 00:21:06.869 "ffdhe6144", 00:21:06.869 "ffdhe8192" 00:21:06.869 ] 00:21:06.869 } 00:21:06.869 }, 00:21:06.869 { 00:21:06.869 "method": "bdev_nvme_set_hotplug", 00:21:06.869 "params": { 00:21:06.869 "period_us": 100000, 00:21:06.869 "enable": false 00:21:06.869 } 00:21:06.869 }, 00:21:06.869 { 00:21:06.869 "method": "bdev_malloc_create", 00:21:06.869 "params": { 00:21:06.869 "name": "malloc0", 00:21:06.869 "num_blocks": 8192, 00:21:06.869 "block_size": 4096, 00:21:06.869 "physical_block_size": 4096, 00:21:06.869 "uuid": "812dfcd9-0ade-497c-bcbc-403d0c748927", 00:21:06.869 "optimal_io_boundary": 0, 00:21:06.869 "md_size": 0, 00:21:06.869 "dif_type": 0, 00:21:06.869 "dif_is_head_of_md": false, 00:21:06.869 "dif_pi_format": 0 00:21:06.869 } 00:21:06.869 }, 00:21:06.869 { 00:21:06.869 "method": "bdev_wait_for_examine" 00:21:06.869 } 00:21:06.869 ] 00:21:06.869 }, 00:21:06.869 { 00:21:06.869 "subsystem": "nbd", 00:21:06.869 "config": [] 00:21:06.869 }, 00:21:06.869 { 00:21:06.869 "subsystem": "scheduler", 00:21:06.869 "config": [ 00:21:06.869 { 00:21:06.870 "method": "framework_set_scheduler", 00:21:06.870 "params": { 00:21:06.870 "name": "static" 00:21:06.870 } 00:21:06.870 } 00:21:06.870 ] 00:21:06.870 }, 00:21:06.870 { 00:21:06.870 "subsystem": "nvmf", 00:21:06.870 "config": [ 00:21:06.870 { 00:21:06.870 "method": "nvmf_set_config", 00:21:06.870 "params": { 00:21:06.870 "discovery_filter": "match_any", 00:21:06.870 "admin_cmd_passthru": { 00:21:06.870 "identify_ctrlr": false 00:21:06.870 }, 00:21:06.870 "dhchap_digests": [ 00:21:06.870 "sha256", 00:21:06.870 "sha384", 00:21:06.870 "sha512" 00:21:06.870 ], 00:21:06.870 "dhchap_dhgroups": [ 00:21:06.870 "null", 00:21:06.870 "ffdhe2048", 00:21:06.870 "ffdhe3072", 00:21:06.870 "ffdhe4096", 00:21:06.870 "ffdhe6144", 00:21:06.870 "ffdhe8192" 00:21:06.870 ] 00:21:06.870 } 00:21:06.870 }, 00:21:06.870 { 00:21:06.870 "method": "nvmf_set_max_subsystems", 00:21:06.870 "params": { 00:21:06.870 "max_subsystems": 1024 00:21:06.870 } 00:21:06.870 }, 00:21:06.870 { 00:21:06.870 "method": "nvmf_set_crdt", 00:21:06.870 "params": { 00:21:06.870 "crdt1": 0, 00:21:06.870 "crdt2": 0, 00:21:06.870 "crdt3": 0 00:21:06.870 } 00:21:06.870 }, 00:21:06.870 { 00:21:06.870 "method": "nvmf_create_transport", 00:21:06.870 "params": { 00:21:06.870 "trtype": "TCP", 00:21:06.870 "max_queue_depth": 128, 00:21:06.870 "max_io_qpairs_per_ctrlr": 127, 00:21:06.870 
"in_capsule_data_size": 4096, 00:21:06.870 "max_io_size": 131072, 00:21:06.870 "io_unit_size": 131072, 00:21:06.870 "max_aq_depth": 128, 00:21:06.870 "num_shared_buffers": 511, 00:21:06.870 "buf_cache_size": 4294967295, 00:21:06.870 "dif_insert_or_strip": false, 00:21:06.870 "zcopy": false, 00:21:06.870 "c2h_success": false, 00:21:06.870 "sock_priority": 0, 00:21:06.870 "abort_timeout_sec": 1, 00:21:06.870 "ack_timeout": 0, 00:21:06.870 "data_wr_pool_size": 0 00:21:06.870 } 00:21:06.870 }, 00:21:06.870 { 00:21:06.870 "method": "nvmf_create_subsystem", 00:21:06.870 "params": { 00:21:06.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.870 "allow_any_host": false, 00:21:06.870 "serial_number": "00000000000000000000", 00:21:06.870 "model_number": "SPDK bdev Controller", 00:21:06.870 "max_namespaces": 32, 00:21:06.870 "min_cntlid": 1, 00:21:06.870 "max_cntlid": 65519, 00:21:06.870 "ana_reporting": false 00:21:06.870 } 00:21:06.870 }, 00:21:06.870 { 00:21:06.870 "method": "nvmf_subsystem_add_host", 00:21:06.870 "params": { 00:21:06.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.870 "host": "nqn.2016-06.io.spdk:host1", 00:21:06.870 "psk": "key0" 00:21:06.870 } 00:21:06.870 }, 00:21:06.870 { 00:21:06.870 "method": "nvmf_subsystem_add_ns", 00:21:06.870 "params": { 00:21:06.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.870 "namespace": { 00:21:06.870 "nsid": 1, 00:21:06.870 "bdev_name": "malloc0", 00:21:06.870 "nguid": "812DFCD90ADE497CBCBC403D0C748927", 00:21:06.870 "uuid": "812dfcd9-0ade-497c-bcbc-403d0c748927", 00:21:06.870 "no_auto_visible": false 00:21:06.870 } 00:21:06.870 } 00:21:06.870 }, 00:21:06.870 { 00:21:06.870 "method": "nvmf_subsystem_add_listener", 00:21:06.870 "params": { 00:21:06.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.870 "listen_address": { 00:21:06.870 "trtype": "TCP", 00:21:06.870 "adrfam": "IPv4", 00:21:06.870 "traddr": "10.0.0.2", 00:21:06.870 "trsvcid": "4420" 00:21:06.870 }, 00:21:06.870 "secure_channel": false, 00:21:06.870 "sock_impl": "ssl" 00:21:06.870 } 00:21:06.870 } 00:21:06.870 ] 00:21:06.870 } 00:21:06.870 ] 00:21:06.870 }' 00:21:06.870 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:07.131 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:07.131 "subsystems": [ 00:21:07.131 { 00:21:07.131 "subsystem": "keyring", 00:21:07.131 "config": [ 00:21:07.131 { 00:21:07.131 "method": "keyring_file_add_key", 00:21:07.131 "params": { 00:21:07.131 "name": "key0", 00:21:07.131 "path": "/tmp/tmp.qEkjAqMy6V" 00:21:07.131 } 00:21:07.131 } 00:21:07.131 ] 00:21:07.131 }, 00:21:07.131 { 00:21:07.131 "subsystem": "iobuf", 00:21:07.131 "config": [ 00:21:07.131 { 00:21:07.131 "method": "iobuf_set_options", 00:21:07.131 "params": { 00:21:07.131 "small_pool_count": 8192, 00:21:07.131 "large_pool_count": 1024, 00:21:07.131 "small_bufsize": 8192, 00:21:07.131 "large_bufsize": 135168 00:21:07.131 } 00:21:07.131 } 00:21:07.131 ] 00:21:07.131 }, 00:21:07.131 { 00:21:07.131 "subsystem": "sock", 00:21:07.131 "config": [ 00:21:07.131 { 00:21:07.131 "method": "sock_set_default_impl", 00:21:07.131 "params": { 00:21:07.131 "impl_name": "posix" 00:21:07.131 } 00:21:07.131 }, 00:21:07.131 { 00:21:07.131 "method": "sock_impl_set_options", 00:21:07.131 "params": { 00:21:07.131 "impl_name": "ssl", 00:21:07.131 "recv_buf_size": 4096, 00:21:07.131 "send_buf_size": 4096, 00:21:07.131 "enable_recv_pipe": true, 00:21:07.131 
"enable_quickack": false, 00:21:07.131 "enable_placement_id": 0, 00:21:07.131 "enable_zerocopy_send_server": true, 00:21:07.131 "enable_zerocopy_send_client": false, 00:21:07.131 "zerocopy_threshold": 0, 00:21:07.131 "tls_version": 0, 00:21:07.131 "enable_ktls": false 00:21:07.131 } 00:21:07.131 }, 00:21:07.131 { 00:21:07.131 "method": "sock_impl_set_options", 00:21:07.131 "params": { 00:21:07.131 "impl_name": "posix", 00:21:07.131 "recv_buf_size": 2097152, 00:21:07.131 "send_buf_size": 2097152, 00:21:07.131 "enable_recv_pipe": true, 00:21:07.131 "enable_quickack": false, 00:21:07.131 "enable_placement_id": 0, 00:21:07.131 "enable_zerocopy_send_server": true, 00:21:07.131 "enable_zerocopy_send_client": false, 00:21:07.131 "zerocopy_threshold": 0, 00:21:07.131 "tls_version": 0, 00:21:07.131 "enable_ktls": false 00:21:07.131 } 00:21:07.131 } 00:21:07.131 ] 00:21:07.131 }, 00:21:07.131 { 00:21:07.131 "subsystem": "vmd", 00:21:07.131 "config": [] 00:21:07.131 }, 00:21:07.131 { 00:21:07.131 "subsystem": "accel", 00:21:07.131 "config": [ 00:21:07.131 { 00:21:07.131 "method": "accel_set_options", 00:21:07.131 "params": { 00:21:07.131 "small_cache_size": 128, 00:21:07.131 "large_cache_size": 16, 00:21:07.131 "task_count": 2048, 00:21:07.131 "sequence_count": 2048, 00:21:07.131 "buf_count": 2048 00:21:07.131 } 00:21:07.131 } 00:21:07.131 ] 00:21:07.131 }, 00:21:07.131 { 00:21:07.131 "subsystem": "bdev", 00:21:07.131 "config": [ 00:21:07.131 { 00:21:07.131 "method": "bdev_set_options", 00:21:07.131 "params": { 00:21:07.131 "bdev_io_pool_size": 65535, 00:21:07.131 "bdev_io_cache_size": 256, 00:21:07.131 "bdev_auto_examine": true, 00:21:07.131 "iobuf_small_cache_size": 128, 00:21:07.131 "iobuf_large_cache_size": 16 00:21:07.131 } 00:21:07.131 }, 00:21:07.131 { 00:21:07.131 "method": "bdev_raid_set_options", 00:21:07.131 "params": { 00:21:07.131 "process_window_size_kb": 1024, 00:21:07.131 "process_max_bandwidth_mb_sec": 0 00:21:07.131 } 00:21:07.131 }, 00:21:07.131 { 00:21:07.131 "method": "bdev_iscsi_set_options", 00:21:07.131 "params": { 00:21:07.131 "timeout_sec": 30 00:21:07.131 } 00:21:07.131 }, 00:21:07.131 { 00:21:07.131 "method": "bdev_nvme_set_options", 00:21:07.131 "params": { 00:21:07.131 "action_on_timeout": "none", 00:21:07.131 "timeout_us": 0, 00:21:07.131 "timeout_admin_us": 0, 00:21:07.131 "keep_alive_timeout_ms": 10000, 00:21:07.131 "arbitration_burst": 0, 00:21:07.131 "low_priority_weight": 0, 00:21:07.131 "medium_priority_weight": 0, 00:21:07.131 "high_priority_weight": 0, 00:21:07.131 "nvme_adminq_poll_period_us": 10000, 00:21:07.131 "nvme_ioq_poll_period_us": 0, 00:21:07.131 "io_queue_requests": 512, 00:21:07.131 "delay_cmd_submit": true, 00:21:07.131 "transport_retry_count": 4, 00:21:07.131 "bdev_retry_count": 3, 00:21:07.131 "transport_ack_timeout": 0, 00:21:07.131 "ctrlr_loss_timeout_sec": 0, 00:21:07.131 "reconnect_delay_sec": 0, 00:21:07.131 "fast_io_fail_timeout_sec": 0, 00:21:07.131 "disable_auto_failback": false, 00:21:07.131 "generate_uuids": false, 00:21:07.131 "transport_tos": 0, 00:21:07.131 "nvme_error_stat": false, 00:21:07.131 "rdma_srq_size": 0, 00:21:07.131 "io_path_stat": false, 00:21:07.131 "allow_accel_sequence": false, 00:21:07.131 "rdma_max_cq_size": 0, 00:21:07.131 "rdma_cm_event_timeout_ms": 0, 00:21:07.131 "dhchap_digests": [ 00:21:07.131 "sha256", 00:21:07.131 "sha384", 00:21:07.131 "sha512" 00:21:07.131 ], 00:21:07.131 "dhchap_dhgroups": [ 00:21:07.131 "null", 00:21:07.131 "ffdhe2048", 00:21:07.131 "ffdhe3072", 00:21:07.131 "ffdhe4096", 00:21:07.131 
"ffdhe6144", 00:21:07.131 "ffdhe8192" 00:21:07.131 ] 00:21:07.131 } 00:21:07.131 }, 00:21:07.131 { 00:21:07.131 "method": "bdev_nvme_attach_controller", 00:21:07.131 "params": { 00:21:07.131 "name": "nvme0", 00:21:07.131 "trtype": "TCP", 00:21:07.131 "adrfam": "IPv4", 00:21:07.131 "traddr": "10.0.0.2", 00:21:07.131 "trsvcid": "4420", 00:21:07.131 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.131 "prchk_reftag": false, 00:21:07.131 "prchk_guard": false, 00:21:07.131 "ctrlr_loss_timeout_sec": 0, 00:21:07.131 "reconnect_delay_sec": 0, 00:21:07.131 "fast_io_fail_timeout_sec": 0, 00:21:07.131 "psk": "key0", 00:21:07.131 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.131 "hdgst": false, 00:21:07.131 "ddgst": false 00:21:07.131 } 00:21:07.131 }, 00:21:07.131 { 00:21:07.131 "method": "bdev_nvme_set_hotplug", 00:21:07.131 "params": { 00:21:07.131 "period_us": 100000, 00:21:07.131 "enable": false 00:21:07.131 } 00:21:07.131 }, 00:21:07.132 { 00:21:07.132 "method": "bdev_enable_histogram", 00:21:07.132 "params": { 00:21:07.132 "name": "nvme0n1", 00:21:07.132 "enable": true 00:21:07.132 } 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "method": "bdev_wait_for_examine" 00:21:07.132 } 00:21:07.132 ] 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "subsystem": "nbd", 00:21:07.132 "config": [] 00:21:07.132 } 00:21:07.132 ] 00:21:07.132 }' 00:21:07.132 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2718338 00:21:07.132 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2718338 ']' 00:21:07.132 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2718338 00:21:07.132 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:07.132 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:07.132 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2718338 00:21:07.132 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:07.132 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:07.132 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2718338' 00:21:07.132 killing process with pid 2718338 00:21:07.132 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2718338 00:21:07.132 Received shutdown signal, test time was about 1.000000 seconds 00:21:07.132 00:21:07.132 Latency(us) 00:21:07.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.132 =================================================================================================================== 00:21:07.132 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:07.132 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2718338 00:21:07.392 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2718041 00:21:07.392 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2718041 ']' 00:21:07.392 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2718041 00:21:07.392 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:07.392 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:07.392 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2718041 00:21:07.392 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:07.392 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:07.392 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2718041' 00:21:07.392 killing process with pid 2718041 00:21:07.392 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2718041 00:21:07.392 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2718041 00:21:07.653 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:07.653 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:07.653 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:07.653 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:07.653 "subsystems": [ 00:21:07.653 { 00:21:07.653 "subsystem": "keyring", 00:21:07.653 "config": [ 00:21:07.653 { 00:21:07.653 "method": "keyring_file_add_key", 00:21:07.653 "params": { 00:21:07.653 "name": "key0", 00:21:07.653 "path": "/tmp/tmp.qEkjAqMy6V" 00:21:07.653 } 00:21:07.653 } 00:21:07.653 ] 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "subsystem": "iobuf", 00:21:07.653 "config": [ 00:21:07.653 { 00:21:07.653 "method": "iobuf_set_options", 00:21:07.653 "params": { 00:21:07.653 "small_pool_count": 8192, 00:21:07.653 "large_pool_count": 1024, 00:21:07.653 "small_bufsize": 8192, 00:21:07.653 "large_bufsize": 135168 00:21:07.653 } 00:21:07.653 } 00:21:07.653 ] 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "subsystem": "sock", 00:21:07.653 "config": [ 00:21:07.653 { 00:21:07.653 "method": "sock_set_default_impl", 00:21:07.653 "params": { 00:21:07.653 "impl_name": "posix" 00:21:07.653 } 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "method": "sock_impl_set_options", 00:21:07.653 "params": { 00:21:07.653 "impl_name": "ssl", 00:21:07.653 "recv_buf_size": 4096, 00:21:07.653 "send_buf_size": 4096, 00:21:07.653 "enable_recv_pipe": true, 00:21:07.653 "enable_quickack": false, 00:21:07.653 "enable_placement_id": 0, 00:21:07.653 "enable_zerocopy_send_server": true, 00:21:07.653 "enable_zerocopy_send_client": false, 00:21:07.653 "zerocopy_threshold": 0, 00:21:07.653 "tls_version": 0, 00:21:07.653 "enable_ktls": false 00:21:07.653 } 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "method": "sock_impl_set_options", 00:21:07.653 "params": { 00:21:07.653 "impl_name": "posix", 00:21:07.653 "recv_buf_size": 2097152, 00:21:07.653 "send_buf_size": 2097152, 00:21:07.653 "enable_recv_pipe": true, 00:21:07.653 "enable_quickack": false, 00:21:07.653 "enable_placement_id": 0, 00:21:07.653 "enable_zerocopy_send_server": true, 00:21:07.653 "enable_zerocopy_send_client": false, 00:21:07.653 "zerocopy_threshold": 0, 00:21:07.653 "tls_version": 0, 00:21:07.653 "enable_ktls": false 00:21:07.653 } 00:21:07.653 } 00:21:07.653 ] 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "subsystem": "vmd", 00:21:07.653 "config": [] 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "subsystem": "accel", 00:21:07.653 "config": [ 00:21:07.653 { 00:21:07.653 "method": "accel_set_options", 00:21:07.653 "params": { 
00:21:07.653 "small_cache_size": 128, 00:21:07.653 "large_cache_size": 16, 00:21:07.653 "task_count": 2048, 00:21:07.653 "sequence_count": 2048, 00:21:07.653 "buf_count": 2048 00:21:07.653 } 00:21:07.653 } 00:21:07.653 ] 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "subsystem": "bdev", 00:21:07.653 "config": [ 00:21:07.653 { 00:21:07.653 "method": "bdev_set_options", 00:21:07.653 "params": { 00:21:07.653 "bdev_io_pool_size": 65535, 00:21:07.653 "bdev_io_cache_size": 256, 00:21:07.653 "bdev_auto_examine": true, 00:21:07.653 "iobuf_small_cache_size": 128, 00:21:07.653 "iobuf_large_cache_size": 16 00:21:07.653 } 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "method": "bdev_raid_set_options", 00:21:07.653 "params": { 00:21:07.653 "process_window_size_kb": 1024, 00:21:07.653 "process_max_bandwidth_mb_sec": 0 00:21:07.653 } 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "method": "bdev_iscsi_set_options", 00:21:07.653 "params": { 00:21:07.653 "timeout_sec": 30 00:21:07.653 } 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "method": "bdev_nvme_set_options", 00:21:07.653 "params": { 00:21:07.653 "action_on_timeout": "none", 00:21:07.653 "timeout_us": 0, 00:21:07.653 "timeout_admin_us": 0, 00:21:07.653 "keep_alive_timeout_ms": 10000, 00:21:07.653 "arbitration_burst": 0, 00:21:07.653 "low_priority_weight": 0, 00:21:07.653 "medium_priority_weight": 0, 00:21:07.653 "high_priority_weight": 0, 00:21:07.653 "nvme_adminq_poll_period_us": 10000, 00:21:07.653 "nvme_ioq_poll_period_us": 0, 00:21:07.653 "io_queue_requests": 0, 00:21:07.653 "delay_cmd_submit": true, 00:21:07.653 "transport_retry_count": 4, 00:21:07.653 "bdev_retry_count": 3, 00:21:07.653 "transport_ack_timeout": 0, 00:21:07.653 "ctrlr_loss_timeout_sec": 0, 00:21:07.653 "reconnect_delay_sec": 0, 00:21:07.653 "fast_io_fail_timeout_sec": 0, 00:21:07.653 "disable_auto_failback": false, 00:21:07.653 "generate_uuids": false, 00:21:07.653 "transport_tos": 0, 00:21:07.653 "nvme_error_stat": false, 00:21:07.653 "rdma_srq_size": 0, 00:21:07.653 "io_path_stat": false, 00:21:07.653 "allow_accel_sequence": false, 00:21:07.653 "rdma_max_cq_size": 0, 00:21:07.653 "rdma_cm_event_timeout_ms": 0, 00:21:07.653 "dhchap_digests": [ 00:21:07.653 "sha256", 00:21:07.653 "sha384", 00:21:07.653 "sha512" 00:21:07.653 ], 00:21:07.653 "dhchap_dhgroups": [ 00:21:07.653 "null", 00:21:07.653 "ffdhe2048", 00:21:07.653 "ffdhe3072", 00:21:07.653 "ffdhe4096", 00:21:07.653 "ffdhe6144", 00:21:07.653 "ffdhe8192" 00:21:07.653 ] 00:21:07.653 } 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "method": "bdev_nvme_set_hotplug", 00:21:07.653 "params": { 00:21:07.653 "period_us": 100000, 00:21:07.653 "enable": false 00:21:07.653 } 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "method": "bdev_malloc_create", 00:21:07.653 "params": { 00:21:07.653 "name": "malloc0", 00:21:07.653 "num_blocks": 8192, 00:21:07.653 "block_size": 4096, 00:21:07.653 "physical_block_size": 4096, 00:21:07.653 "uuid": "812dfcd9-0ade-497c-bcbc-403d0c748927", 00:21:07.653 "optimal_io_boundary": 0, 00:21:07.653 "md_size": 0, 00:21:07.653 "dif_type": 0, 00:21:07.653 "dif_is_head_of_md": false, 00:21:07.653 "dif_pi_format": 0 00:21:07.653 } 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "method": "bdev_wait_for_examine" 00:21:07.653 } 00:21:07.653 ] 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "subsystem": "nbd", 00:21:07.653 "config": [] 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "subsystem": "scheduler", 00:21:07.653 "config": [ 00:21:07.653 { 00:21:07.653 "method": "framework_set_scheduler", 00:21:07.653 "params": { 00:21:07.653 "name": 
"static" 00:21:07.653 } 00:21:07.653 } 00:21:07.653 ] 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "subsystem": "nvmf", 00:21:07.653 "config": [ 00:21:07.653 { 00:21:07.653 "method": "nvmf_set_config", 00:21:07.653 "params": { 00:21:07.653 "discovery_filter": "match_any", 00:21:07.653 "admin_cmd_passthru": { 00:21:07.653 "identify_ctrlr": false 00:21:07.653 }, 00:21:07.653 "dhchap_digests": [ 00:21:07.653 "sha256", 00:21:07.653 "sha384", 00:21:07.653 "sha512" 00:21:07.653 ], 00:21:07.653 "dhchap_dhgroups": [ 00:21:07.653 "null", 00:21:07.653 "ffdhe2048", 00:21:07.653 "ffdhe3072", 00:21:07.653 "ffdhe4096", 00:21:07.653 "ffdhe6144", 00:21:07.653 "ffdhe8192" 00:21:07.653 ] 00:21:07.653 } 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "method": "nvmf_set_max_subsystems", 00:21:07.653 "params": { 00:21:07.653 "max_subsystems": 1024 00:21:07.653 } 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "method": "nvmf_set_crdt", 00:21:07.653 "params": { 00:21:07.653 "crdt1": 0, 00:21:07.653 "crdt2": 0, 00:21:07.653 "crdt3": 0 00:21:07.653 } 00:21:07.653 }, 00:21:07.653 { 00:21:07.653 "method": "nvmf_create_transport", 00:21:07.653 "params": { 00:21:07.653 "trtype": "TCP", 00:21:07.653 "max_queue_depth": 128, 00:21:07.653 "max_io_qpairs_per_ctrlr": 127, 00:21:07.653 "in_capsule_data_size": 4096, 00:21:07.653 "max_io_size": 131072, 00:21:07.653 "io_unit_size": 131072, 00:21:07.653 "max_aq_depth": 128, 00:21:07.653 "num_shared_buffers": 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.653 511, 00:21:07.653 "buf_cache_size": 4294967295, 00:21:07.654 "dif_insert_or_strip": false, 00:21:07.654 "zcopy": false, 00:21:07.654 "c2h_success": false, 00:21:07.654 "sock_priority": 0, 00:21:07.654 "abort_timeout_sec": 1, 00:21:07.654 "ack_timeout": 0, 00:21:07.654 "data_wr_pool_size": 0 00:21:07.654 } 00:21:07.654 }, 00:21:07.654 { 00:21:07.654 "method": "nvmf_create_subsystem", 00:21:07.654 "params": { 00:21:07.654 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.654 "allow_any_host": false, 00:21:07.654 "serial_number": "00000000000000000000", 00:21:07.654 "model_number": "SPDK bdev Controller", 00:21:07.654 "max_namespaces": 32, 00:21:07.654 "min_cntlid": 1, 00:21:07.654 "max_cntlid": 65519, 00:21:07.654 "ana_reporting": false 00:21:07.654 } 00:21:07.654 }, 00:21:07.654 { 00:21:07.654 "method": "nvmf_subsystem_add_host", 00:21:07.654 "params": { 00:21:07.654 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.654 "host": "nqn.2016-06.io.spdk:host1", 00:21:07.654 "psk": "key0" 00:21:07.654 } 00:21:07.654 }, 00:21:07.654 { 00:21:07.654 "method": "nvmf_subsystem_add_ns", 00:21:07.654 "params": { 00:21:07.654 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.654 "namespace": { 00:21:07.654 "nsid": 1, 00:21:07.654 "bdev_name": "malloc0", 00:21:07.654 "nguid": "812DFCD90ADE497CBCBC403D0C748927", 00:21:07.654 "uuid": "812dfcd9-0ade-497c-bcbc-403d0c748927", 00:21:07.654 "no_auto_visible": false 00:21:07.654 } 00:21:07.654 } 00:21:07.654 }, 00:21:07.654 { 00:21:07.654 "method": "nvmf_subsystem_add_listener", 00:21:07.654 "params": { 00:21:07.654 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.654 "listen_address": { 00:21:07.654 "trtype": "TCP", 00:21:07.654 "adrfam": "IPv4", 00:21:07.654 "traddr": "10.0.0.2", 00:21:07.654 "trsvcid": "4420" 00:21:07.654 }, 00:21:07.654 "secure_channel": false, 00:21:07.654 "sock_impl": "ssl" 00:21:07.654 } 00:21:07.654 } 00:21:07.654 ] 00:21:07.654 } 00:21:07.654 ] 00:21:07.654 }' 00:21:07.654 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
nvmfpid=2718966 00:21:07.654 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2718966 00:21:07.654 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2718966 ']' 00:21:07.654 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.654 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:07.654 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.654 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:07.654 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.654 16:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:07.654 [2024-10-01 16:45:59.184196] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:21:07.654 [2024-10-01 16:45:59.184249] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.654 [2024-10-01 16:45:59.264678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.654 [2024-10-01 16:45:59.324781] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.654 [2024-10-01 16:45:59.324818] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.654 [2024-10-01 16:45:59.324825] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.654 [2024-10-01 16:45:59.324831] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.654 [2024-10-01 16:45:59.324837] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:07.654 [2024-10-01 16:45:59.324886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.914 [2024-10-01 16:45:59.529203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.914 [2024-10-01 16:45:59.561223] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:07.914 [2024-10-01 16:45:59.561428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.484 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:08.485 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:08.485 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:08.485 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:08.485 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.485 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.485 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2719012 00:21:08.485 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2719012 /var/tmp/bdevperf.sock 00:21:08.485 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2719012 ']' 00:21:08.485 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:08.485 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:08.485 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:08.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:08.485 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:08.485 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:08.485 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.485 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:08.485 "subsystems": [ 00:21:08.485 { 00:21:08.485 "subsystem": "keyring", 00:21:08.485 "config": [ 00:21:08.485 { 00:21:08.485 "method": "keyring_file_add_key", 00:21:08.485 "params": { 00:21:08.485 "name": "key0", 00:21:08.485 "path": "/tmp/tmp.qEkjAqMy6V" 00:21:08.485 } 00:21:08.485 } 00:21:08.485 ] 00:21:08.485 }, 00:21:08.485 { 00:21:08.485 "subsystem": "iobuf", 00:21:08.485 "config": [ 00:21:08.485 { 00:21:08.485 "method": "iobuf_set_options", 00:21:08.485 "params": { 00:21:08.485 "small_pool_count": 8192, 00:21:08.485 "large_pool_count": 1024, 00:21:08.485 "small_bufsize": 8192, 00:21:08.485 "large_bufsize": 135168 00:21:08.485 } 00:21:08.485 } 00:21:08.485 ] 00:21:08.485 }, 00:21:08.485 { 00:21:08.485 "subsystem": "sock", 00:21:08.485 "config": [ 00:21:08.485 { 00:21:08.485 "method": "sock_set_default_impl", 00:21:08.485 "params": { 00:21:08.485 "impl_name": "posix" 00:21:08.485 } 00:21:08.485 }, 00:21:08.485 { 00:21:08.485 "method": "sock_impl_set_options", 00:21:08.485 "params": { 00:21:08.485 "impl_name": "ssl", 00:21:08.485 "recv_buf_size": 4096, 00:21:08.485 "send_buf_size": 4096, 00:21:08.485 "enable_recv_pipe": true, 00:21:08.485 "enable_quickack": false, 00:21:08.485 "enable_placement_id": 0, 00:21:08.485 "enable_zerocopy_send_server": true, 00:21:08.485 "enable_zerocopy_send_client": false, 00:21:08.485 "zerocopy_threshold": 0, 00:21:08.485 "tls_version": 0, 00:21:08.485 "enable_ktls": false 00:21:08.485 } 00:21:08.485 }, 00:21:08.485 { 00:21:08.485 "method": "sock_impl_set_options", 00:21:08.485 "params": { 00:21:08.485 "impl_name": "posix", 00:21:08.485 "recv_buf_size": 2097152, 00:21:08.485 "send_buf_size": 2097152, 00:21:08.485 "enable_recv_pipe": true, 00:21:08.485 "enable_quickack": false, 00:21:08.485 "enable_placement_id": 0, 00:21:08.485 "enable_zerocopy_send_server": true, 00:21:08.485 "enable_zerocopy_send_client": false, 00:21:08.485 "zerocopy_threshold": 0, 00:21:08.485 "tls_version": 0, 00:21:08.485 "enable_ktls": false 00:21:08.485 } 00:21:08.485 } 00:21:08.485 ] 00:21:08.485 }, 00:21:08.485 { 00:21:08.485 "subsystem": "vmd", 00:21:08.485 "config": [] 00:21:08.485 }, 00:21:08.485 { 00:21:08.485 "subsystem": "accel", 00:21:08.485 "config": [ 00:21:08.485 { 00:21:08.485 "method": "accel_set_options", 00:21:08.485 "params": { 00:21:08.485 "small_cache_size": 128, 00:21:08.485 "large_cache_size": 16, 00:21:08.485 "task_count": 2048, 00:21:08.485 "sequence_count": 2048, 00:21:08.485 "buf_count": 2048 00:21:08.485 } 00:21:08.485 } 00:21:08.485 ] 00:21:08.485 }, 00:21:08.485 { 00:21:08.485 "subsystem": "bdev", 00:21:08.485 "config": [ 00:21:08.485 { 00:21:08.485 "method": "bdev_set_options", 00:21:08.485 "params": { 00:21:08.485 "bdev_io_pool_size": 65535, 00:21:08.485 "bdev_io_cache_size": 256, 00:21:08.485 "bdev_auto_examine": true, 00:21:08.485 "iobuf_small_cache_size": 128, 00:21:08.485 "iobuf_large_cache_size": 16 00:21:08.485 } 00:21:08.485 }, 00:21:08.485 { 00:21:08.485 "method": "bdev_raid_set_options", 00:21:08.485 
"params": { 00:21:08.485 "process_window_size_kb": 1024, 00:21:08.485 "process_max_bandwidth_mb_sec": 0 00:21:08.485 } 00:21:08.485 }, 00:21:08.485 { 00:21:08.485 "method": "bdev_iscsi_set_options", 00:21:08.485 "params": { 00:21:08.485 "timeout_sec": 30 00:21:08.485 } 00:21:08.485 }, 00:21:08.485 { 00:21:08.485 "method": "bdev_nvme_set_options", 00:21:08.485 "params": { 00:21:08.485 "action_on_timeout": "none", 00:21:08.485 "timeout_us": 0, 00:21:08.485 "timeout_admin_us": 0, 00:21:08.485 "keep_alive_timeout_ms": 10000, 00:21:08.485 "arbitration_burst": 0, 00:21:08.485 "low_priority_weight": 0, 00:21:08.485 "medium_priority_weight": 0, 00:21:08.485 "high_priority_weight": 0, 00:21:08.485 "nvme_adminq_poll_period_us": 10000, 00:21:08.485 "nvme_ioq_poll_period_us": 0, 00:21:08.485 "io_queue_requests": 512, 00:21:08.485 "delay_cmd_submit": true, 00:21:08.485 "transport_retry_count": 4, 00:21:08.485 "bdev_retry_count": 3, 00:21:08.485 "transport_ack_timeout": 0, 00:21:08.485 "ctrlr_loss_timeout_sec": 0, 00:21:08.485 "reconnect_delay_sec": 0, 00:21:08.485 "fast_io_fail_timeout_sec": 0, 00:21:08.485 "disable_auto_failback": false, 00:21:08.485 "generate_uuids": false, 00:21:08.485 "transport_tos": 0, 00:21:08.485 "nvme_error_stat": false, 00:21:08.485 "rdma_srq_size": 0, 00:21:08.485 "io_path_stat": false, 00:21:08.485 "allow_accel_sequence": false, 00:21:08.485 "rdma_max_cq_size": 0, 00:21:08.485 "rdma_cm_event_timeout_ms": 0, 00:21:08.485 "dhchap_digests": [ 00:21:08.485 "sha256", 00:21:08.485 "sha384", 00:21:08.485 "sha512" 00:21:08.485 ], 00:21:08.485 "dhchap_dhgroups": [ 00:21:08.485 "null", 00:21:08.485 "ffdhe2048", 00:21:08.485 "ffdhe3072", 00:21:08.485 "ffdhe4096", 00:21:08.485 "ffdhe6144", 00:21:08.485 "ffdhe8192" 00:21:08.485 ] 00:21:08.485 } 00:21:08.485 }, 00:21:08.485 { 00:21:08.485 "method": "bdev_nvme_attach_controller", 00:21:08.485 "params": { 00:21:08.485 "name": "nvme0", 00:21:08.485 "trtype": "TCP", 00:21:08.485 "adrfam": "IPv4", 00:21:08.485 "traddr": "10.0.0.2", 00:21:08.485 "trsvcid": "4420", 00:21:08.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.485 "prchk_reftag": false, 00:21:08.485 "prchk_guard": false, 00:21:08.485 "ctrlr_loss_timeout_sec": 0, 00:21:08.485 "reconnect_delay_sec": 0, 00:21:08.485 "fast_io_fail_timeout_sec": 0, 00:21:08.485 "psk": "key0", 00:21:08.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:08.485 "hdgst": false, 00:21:08.485 "ddgst": false 00:21:08.485 } 00:21:08.485 }, 00:21:08.485 { 00:21:08.485 "method": "bdev_nvme_set_hotplug", 00:21:08.485 "params": { 00:21:08.485 "period_us": 100000, 00:21:08.485 "enable": false 00:21:08.485 } 00:21:08.485 }, 00:21:08.485 { 00:21:08.485 "method": "bdev_enable_histogram", 00:21:08.485 "params": { 00:21:08.485 "name": "nvme0n1", 00:21:08.485 "enable": true 00:21:08.485 } 00:21:08.485 }, 00:21:08.485 { 00:21:08.485 "method": "bdev_wait_for_examine" 00:21:08.485 } 00:21:08.485 ] 00:21:08.485 }, 00:21:08.485 { 00:21:08.485 "subsystem": "nbd", 00:21:08.485 "config": [] 00:21:08.485 } 00:21:08.485 ] 00:21:08.485 }' 00:21:08.485 [2024-10-01 16:46:00.140712] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:21:08.486 [2024-10-01 16:46:00.140764] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2719012 ] 00:21:08.746 [2024-10-01 16:46:00.191604] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.746 [2024-10-01 16:46:00.246161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.746 [2024-10-01 16:46:00.380396] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:09.316 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:09.316 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:09.316 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:09.316 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:09.576 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.576 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:09.835 Running I/O for 1 seconds... 00:21:10.777 954.00 IOPS, 3.73 MiB/s 00:21:10.777 Latency(us) 00:21:10.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.777 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:10.777 Verification LBA range: start 0x0 length 0x2000 00:21:10.777 nvme0n1 : 1.07 1009.27 3.94 0.00 0.00 123689.83 4511.90 191970.07 00:21:10.777 =================================================================================================================== 00:21:10.777 Total : 1009.27 3.94 0.00 0.00 123689.83 4511.90 191970.07 00:21:10.777 { 00:21:10.777 "results": [ 00:21:10.777 { 00:21:10.777 "job": "nvme0n1", 00:21:10.777 "core_mask": "0x2", 00:21:10.777 "workload": "verify", 00:21:10.777 "status": "finished", 00:21:10.777 "verify_range": { 00:21:10.777 "start": 0, 00:21:10.777 "length": 8192 00:21:10.777 }, 00:21:10.777 "queue_depth": 128, 00:21:10.777 "io_size": 4096, 00:21:10.777 "runtime": 1.07305, 00:21:10.777 "iops": 1009.272634080425, 00:21:10.777 "mibps": 3.94247122687666, 00:21:10.777 "io_failed": 0, 00:21:10.777 "io_timeout": 0, 00:21:10.777 "avg_latency_us": 123689.8320619362, 00:21:10.777 "min_latency_us": 4511.901538461539, 00:21:10.777 "max_latency_us": 191970.0676923077 00:21:10.777 } 00:21:10.777 ], 00:21:10.777 "core_count": 1 00:21:10.777 } 00:21:10.777 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:10.777 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:10.777 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:10.777 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:21:10.777 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:21:10.777 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:21:10.777 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:10.777 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:21:10.777 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:21:10.777 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:21:10.777 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:10.777 nvmf_trace.0 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2719012 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2719012 ']' 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2719012 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2719012 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2719012' 00:21:11.038 killing process with pid 2719012 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2719012 00:21:11.038 Received shutdown signal, test time was about 1.000000 seconds 00:21:11.038 00:21:11.038 Latency(us) 00:21:11.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.038 =================================================================================================================== 00:21:11.038 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2719012 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:11.038 rmmod nvme_tcp 00:21:11.038 rmmod nvme_fabrics 00:21:11.038 rmmod nvme_keyring 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # 
return 0 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 2718966 ']' 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 2718966 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2718966 ']' 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2718966 00:21:11.038 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:11.298 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.298 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2718966 00:21:11.298 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:11.298 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:11.298 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2718966' 00:21:11.298 killing process with pid 2718966 00:21:11.298 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2718966 00:21:11.298 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2718966 00:21:11.298 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:11.298 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:11.298 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:11.298 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:21:11.298 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:21:11.298 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:11.298 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:21:11.298 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:11.298 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:11.299 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.299 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.299 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.841 16:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:13.841 16:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.7a9N5QsfNx /tmp/tmp.YAo7LE1ZKx /tmp/tmp.qEkjAqMy6V 00:21:13.841 00:21:13.841 real 1m23.474s 00:21:13.841 user 2m15.272s 00:21:13.841 sys 0m23.848s 00:21:13.841 16:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:13.841 16:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.841 ************************************ 00:21:13.841 END TEST nvmf_tls 00:21:13.841 ************************************ 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:13.841 ************************************ 00:21:13.841 START TEST nvmf_fips 00:21:13.841 ************************************ 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:13.841 * Looking for test storage... 00:21:13.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:13.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.841 --rc genhtml_branch_coverage=1 00:21:13.841 --rc genhtml_function_coverage=1 00:21:13.841 --rc genhtml_legend=1 00:21:13.841 --rc geninfo_all_blocks=1 00:21:13.841 --rc geninfo_unexecuted_blocks=1 00:21:13.841 00:21:13.841 ' 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:13.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.841 --rc genhtml_branch_coverage=1 00:21:13.841 --rc genhtml_function_coverage=1 00:21:13.841 --rc genhtml_legend=1 00:21:13.841 --rc geninfo_all_blocks=1 00:21:13.841 --rc geninfo_unexecuted_blocks=1 00:21:13.841 00:21:13.841 ' 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:13.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.841 --rc genhtml_branch_coverage=1 00:21:13.841 --rc genhtml_function_coverage=1 00:21:13.841 --rc genhtml_legend=1 00:21:13.841 --rc geninfo_all_blocks=1 00:21:13.841 --rc geninfo_unexecuted_blocks=1 00:21:13.841 00:21:13.841 ' 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:13.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.841 --rc genhtml_branch_coverage=1 00:21:13.841 --rc genhtml_function_coverage=1 00:21:13.841 --rc genhtml_legend=1 00:21:13.841 --rc geninfo_all_blocks=1 00:21:13.841 --rc geninfo_unexecuted_blocks=1 00:21:13.841 00:21:13.841 ' 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:13.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:13.841 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:13.842 16:46:05 
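[Annotation] The trace above trips a latent bug at test/nvmf/common.sh line 33: the conditional expands to '[' '' -eq 1 ']' because the tested variable is empty, so bash prints "integer expression expected" and the branch is silently skipped. A minimal hardened sketch, assuming the flag should default to 0 (the variable's name is not visible in the trace, so SPDK_TEST_SOME_FLAG below is a placeholder, not the real name):

    # Hedged sketch; SPDK_TEST_SOME_FLAG is a placeholder for whatever
    # variable common.sh line 33 actually tests.
    if [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ]; then
        :  # branch taken only when the flag is explicitly set to 1
    fi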
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:21:13.842 Error setting digest 00:21:13.842 40F23D065B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:13.842 40F23D065B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:13.842 
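[Annotation] Taken together, the fips.sh trace above is a four-step gate run before any TLS traffic: require OpenSSL >= 3.0.0, require the FIPS module file, require a "fips" entry in the provider list, and require that a non-approved digest actually fails (the "Error setting digest" above is the expected outcome, not a test failure). A condensed, hedged restatement, assuming the RHEL-style module path printed by this host:

    # Not the verbatim fips/fips.sh; a condensed sketch of the checks traced above.
    target=3.0.0
    current=$(openssl version | awk '{print $2}')              # 3.1.1 in this run
    printf '%s\n' "$target" "$current" | sort -V -C || exit 1  # require current >= target
    modulesdir=$(openssl info -modulesdir)                     # /usr/lib64/ossl-modules here
    [ -f "$modulesdir/fips.so" ] || exit 1                     # FIPS provider must exist
    openssl list -providers | grep -qi fips || exit 1          # and must be loaded
    # Under an enforced FIPS config MD5 must be rejected:
    if echo -n x | openssl md5 >/dev/null 2>&1; then
        echo 'MD5 succeeded; FIPS provider is not enforcing' >&2
        exit 1
    fi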
16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:13.842 16:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:20.429 16:46:12 
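[Annotation] The gather_supported_nvmf_pci_devs trace above builds per-family arrays (e810, x722, mlx) by indexing an associative pci_bus_cache map keyed as vendor:device. A minimal sketch of the pattern, assuming pci_bus_cache maps those IDs to space-separated PCI addresses (its population is not shown in this excerpt):

    # Hedged sketch of the bucketing pattern; pci_bus_cache is assumed
    # populated elsewhere from lspci/sysfs.
    declare -A pci_bus_cache
    intel=0x8086 mellanox=0x15b3
    e810=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x159b"]:-})      # E810-family NICs
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]:-})    # ConnectX-5, for example
    pci_devs=("${e810[@]}")   # SPDK_TEST_NVMF_NICS=e810 selects this bucket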
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:20.429 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:20.429 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:20.429 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:20.430 16:46:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:20.430 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:20.430 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:20.430 16:46:12 
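[Annotation] For each selected PCI address, the trace above resolves the kernel network interface through sysfs rather than by name, which is why the two ice ports surface as cvl_0_0 and cvl_0_1. A sketch of that lookup as traced:

    # The glob lists the netdev(s) bound to the PCI function; the
    # directory prefix is then stripped to leave the interface name.
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done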
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:20.430 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:20.691 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:20.691 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:20.691 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:20.691 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:20.691 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:20.691 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:20.691 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:20.691 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:20.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:20.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:21:20.692 00:21:20.692 --- 10.0.0.2 ping statistics --- 00:21:20.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.692 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:20.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:20.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:21:20.692 00:21:20.692 --- 10.0.0.1 ping statistics --- 00:21:20.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.692 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=2723539 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 2723539 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2723539 ']' 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:20.692 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:21.009 [2024-10-01 16:46:12.410408] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
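[Annotation] The nvmf_tcp_init trace above turns the two physical ports into a point-to-point lab: cvl_0_0 is moved into a private namespace as the target side (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), port 4420 is opened in iptables, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. Condensed from the trace (the long nvmf_tgt path is shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2       # target app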
00:21:21.009 [2024-10-01 16:46:12.410482] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.009 [2024-10-01 16:46:12.472449] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.009 [2024-10-01 16:46:12.536472] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.009 [2024-10-01 16:46:12.536509] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.009 [2024-10-01 16:46:12.536515] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.009 [2024-10-01 16:46:12.536521] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.009 [2024-10-01 16:46:12.536525] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:21.009 [2024-10-01 16:46:12.536547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.009 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:21.009 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:21:21.009 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:21.009 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:21.009 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:21.009 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.009 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:21.009 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:21.009 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:21.009 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.sa0 00:21:21.009 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:21.009 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.sa0 00:21:21.009 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.sa0 00:21:21.009 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.sa0 00:21:21.009 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:21.281 [2024-10-01 16:46:12.868480] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.281 [2024-10-01 16:46:12.884500] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:21.281 [2024-10-01 16:46:12.884679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.281 malloc0 00:21:21.281 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:21.281 16:46:12 
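[Annotation] Key material above is handled file-based: the NVMe/TCP interchange-format PSK is written to a mktemp file restricted to mode 0600, and only the path is passed onward over RPC, so the secret never appears on a command line. Sketch of the steps traced:

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$(mktemp -t spdk-psk.XXX)      # /tmp/spdk-psk.sa0 in this run
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"
    # setup_nvmf_tgt_conf then registers the key with the target via rpc.py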
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2723570 00:21:21.281 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2723570 /var/tmp/bdevperf.sock 00:21:21.281 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:21.281 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2723570 ']' 00:21:21.281 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.281 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:21.281 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:21.281 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:21.281 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:21.560 [2024-10-01 16:46:13.010916] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:21:21.560 [2024-10-01 16:46:13.010968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723570 ] 00:21:21.561 [2024-10-01 16:46:13.062721] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.561 [2024-10-01 16:46:13.115753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.561 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:21.561 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:21:21.561 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.sa0 00:21:21.846 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:22.144 [2024-10-01 16:46:13.596663] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:22.144 TLSTESTn1 00:21:22.144 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:22.144 Running I/O for 10 seconds... 
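[Annotation] On the initiator side, the trace above drives everything over the bdevperf RPC socket: the PSK file is registered as key0, a TLS-enabled controller is attached to the listener inside the namespace, and the resulting TLSTESTn1 bdev is exercised for ten seconds. The RPC sequence, condensed (long rpc.py and bdevperf.py paths shortened):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.sa0
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests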
00:21:32.419 1785.00 IOPS, 6.97 MiB/s 2100.00 IOPS, 8.20 MiB/s 3123.67 IOPS, 12.20 MiB/s 3509.50 IOPS, 13.71 MiB/s 3432.20 IOPS, 13.41 MiB/s 3200.17 IOPS, 12.50 MiB/s 3465.29 IOPS, 13.54 MiB/s 3464.25 IOPS, 13.53 MiB/s 3253.00 IOPS, 12.71 MiB/s 3089.00 IOPS, 12.07 MiB/s 00:21:32.419 Latency(us) 00:21:32.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.419 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:32.419 Verification LBA range: start 0x0 length 0x2000 00:21:32.419 TLSTESTn1 : 10.09 3072.73 12.00 0.00 0.00 41482.19 6251.13 166158.97 00:21:32.419 =================================================================================================================== 00:21:32.419 Total : 3072.73 12.00 0.00 0.00 41482.19 6251.13 166158.97 00:21:32.419 { 00:21:32.419 "results": [ 00:21:32.419 { 00:21:32.419 "job": "TLSTESTn1", 00:21:32.419 "core_mask": "0x4", 00:21:32.419 "workload": "verify", 00:21:32.419 "status": "finished", 00:21:32.419 "verify_range": { 00:21:32.419 "start": 0, 00:21:32.419 "length": 8192 00:21:32.419 }, 00:21:32.419 "queue_depth": 128, 00:21:32.419 "io_size": 4096, 00:21:32.419 "runtime": 10.094618, 00:21:32.419 "iops": 3072.726476623484, 00:21:32.420 "mibps": 12.002837799310484, 00:21:32.420 "io_failed": 0, 00:21:32.420 "io_timeout": 0, 00:21:32.420 "avg_latency_us": 41482.188245634046, 00:21:32.420 "min_latency_us": 6251.126153846154, 00:21:32.420 "max_latency_us": 166158.96615384615 00:21:32.420 } 00:21:32.420 ], 00:21:32.420 "core_count": 1 00:21:32.420 } 00:21:32.420 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:32.420 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:32.420 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:21:32.420 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:21:32.420 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:21:32.420 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:32.420 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:21:32.420 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:21:32.420 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:21:32.420 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:32.420 nvmf_trace.0 00:21:32.420 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:21:32.420 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2723570 00:21:32.420 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2723570 ']' 00:21:32.420 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2723570 00:21:32.420 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:21:32.420 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:32.420 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2723570 00:21:32.420 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:32.420 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:32.420 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2723570' 00:21:32.420 killing process with pid 2723570 00:21:32.420 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2723570 00:21:32.420 Received shutdown signal, test time was about 10.000000 seconds 00:21:32.420 00:21:32.420 Latency(us) 00:21:32.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.420 =================================================================================================================== 00:21:32.420 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:32.420 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2723570 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:32.688 rmmod nvme_tcp 00:21:32.688 rmmod nvme_fabrics 00:21:32.688 rmmod nvme_keyring 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 2723539 ']' 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 2723539 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2723539 ']' 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2723539 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2723539 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2723539' 00:21:32.688 killing process with pid 2723539 00:21:32.688 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2723539 00:21:32.688 16:46:24 
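[Annotation] Teardown above follows the killprocess pattern from autotest_common.sh: confirm the pid is still alive, confirm via its comm name that it is an SPDK reactor rather than the sudo wrapper, then kill and reap it. A reduced sketch of the helper as it appears in the trace (the real one also does a uname/platform check, omitted here):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                   # already gone
        local name; name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1               # never kill the wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }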
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2723539 00:21:32.949 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:32.949 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:32.949 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:32.949 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:32.949 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:21:32.949 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:32.949 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:21:32.949 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:32.949 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:32.949 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.949 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.949 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.860 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:34.860 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.sa0 00:21:34.860 00:21:34.860 real 0m21.440s 00:21:34.860 user 0m23.092s 00:21:34.860 sys 0m8.851s 00:21:34.860 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:34.860 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:34.860 ************************************ 00:21:34.860 END TEST nvmf_fips 00:21:34.860 ************************************ 00:21:34.860 16:46:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:34.860 16:46:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:34.860 16:46:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:35.121 16:46:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:35.121 ************************************ 00:21:35.121 START TEST nvmf_control_msg_list 00:21:35.121 ************************************ 00:21:35.121 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:35.121 * Looking for test storage... 
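[Annotation] The END TEST banner and the real/user/sys lines above come from the run_test wrapper that brackets every suite in this job; nvmf_control_msg_list is launched the same way immediately after. A simplified sketch of that wrapper (the real one also manages xtrace state, which is omitted here):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }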
00:21:35.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:35.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.122 --rc genhtml_branch_coverage=1 00:21:35.122 --rc genhtml_function_coverage=1 00:21:35.122 --rc genhtml_legend=1 00:21:35.122 --rc geninfo_all_blocks=1 00:21:35.122 --rc geninfo_unexecuted_blocks=1 00:21:35.122 00:21:35.122 ' 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:35.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.122 --rc genhtml_branch_coverage=1 00:21:35.122 --rc genhtml_function_coverage=1 00:21:35.122 --rc genhtml_legend=1 00:21:35.122 --rc geninfo_all_blocks=1 00:21:35.122 --rc geninfo_unexecuted_blocks=1 00:21:35.122 00:21:35.122 ' 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:35.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.122 --rc genhtml_branch_coverage=1 00:21:35.122 --rc genhtml_function_coverage=1 00:21:35.122 --rc genhtml_legend=1 00:21:35.122 --rc geninfo_all_blocks=1 00:21:35.122 --rc geninfo_unexecuted_blocks=1 00:21:35.122 00:21:35.122 ' 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:35.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.122 --rc genhtml_branch_coverage=1 00:21:35.122 --rc genhtml_function_coverage=1 00:21:35.122 --rc genhtml_legend=1 00:21:35.122 --rc geninfo_all_blocks=1 00:21:35.122 --rc geninfo_unexecuted_blocks=1 00:21:35.122 00:21:35.122 ' 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.122 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:35.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:35.383 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:41.968 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.968 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:41.968 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:41.968 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:41.968 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:41.968 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:41.968 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:41.968 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:41.968 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:41.968 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:41.968 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:41.968 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:41.968 16:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:41.968 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:41.969 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.969 16:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:41.969 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:41.969 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:41.969 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:41.969 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.230 16:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:42.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:21:42.230 00:21:42.230 --- 10.0.0.2 ping statistics --- 00:21:42.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.230 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:42.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:21:42.230 00:21:42.230 --- 10.0.0.1 ping statistics --- 00:21:42.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.230 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=2729387 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 2729387 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 2729387 ']' 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:42.230 16:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:42.230 16:46:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:42.230 [2024-10-01 16:46:33.832165] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:21:42.230 [2024-10-01 16:46:33.832232] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.490 [2024-10-01 16:46:33.919585] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.490 [2024-10-01 16:46:34.009446] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.490 [2024-10-01 16:46:34.009499] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.490 [2024-10-01 16:46:34.009507] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.490 [2024-10-01 16:46:34.009514] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.490 [2024-10-01 16:46:34.009520] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
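At this point common.sh has finished nvmf_tcp_init: the two Intel E810 ports (0x8086:0x159b, ice driver) were matched, their netdevs cvl_0_0 and cvl_0_1 found up, the target-side port was moved into the cvl_0_0_ns_spdk namespace, both directions were ping-verified, and nvmf_tgt was launched inside that namespace. A condensed sketch of the topology commands, lifted from the trace above (run as root; the earlier "[: : integer expression expected" complaint comes from common.sh line 33 testing an empty flag with -eq, and a defensive form such as [ "${flag:-0}" -eq 1 ] would silence it without changing behavior):

    # start clean, then split the two ports across namespaces
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the comment tag lets cleanup strip the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # verify reachability both ways, then start the target in the namespace
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF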
00:21:42.490 [2024-10-01 16:46:34.009552] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:43.061 [2024-10-01 16:46:34.701886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:43.061 Malloc0 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.061 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:43.322 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.322 16:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:43.322 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.322 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:43.322 [2024-10-01 16:46:34.753901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.322 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.322 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2729632 00:21:43.322 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2729633 00:21:43.322 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2729634 00:21:43.322 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2729632 00:21:43.322 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:43.322 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:43.322 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:43.322 [2024-10-01 16:46:34.792127] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:43.322 [2024-10-01 16:46:34.812414] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:43.322 [2024-10-01 16:46:34.812803] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:44.261 Initializing NVMe Controllers 00:21:44.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:44.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:44.261 Initialization complete. Launching workers. 
00:21:44.261 ======================================================== 00:21:44.261 Latency(us) 00:21:44.261 Device Information : IOPS MiB/s Average min max 00:21:44.261 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1750.00 6.84 571.48 242.12 782.28 00:21:44.261 ======================================================== 00:21:44.261 Total : 1750.00 6.84 571.48 242.12 782.28 00:21:44.261 00:21:44.522 Initializing NVMe Controllers 00:21:44.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:44.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:44.522 Initialization complete. Launching workers. 00:21:44.522 ======================================================== 00:21:44.522 Latency(us) 00:21:44.522 Device Information : IOPS MiB/s Average min max 00:21:44.522 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1726.00 6.74 579.49 183.69 814.22 00:21:44.522 ======================================================== 00:21:44.522 Total : 1726.00 6.74 579.49 183.69 814.22 00:21:44.522 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2729633 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2729634 00:21:44.522 Initializing NVMe Controllers 00:21:44.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:44.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:44.522 Initialization complete. Launching workers. 00:21:44.522 ======================================================== 00:21:44.522 Latency(us) 00:21:44.522 Device Information : IOPS MiB/s Average min max 00:21:44.522 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1728.00 6.75 578.76 240.29 813.40 00:21:44.522 ======================================================== 00:21:44.522 Total : 1728.00 6.75 578.76 240.29 813.40 00:21:44.522 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:44.522 rmmod nvme_tcp 00:21:44.522 rmmod nvme_fabrics 00:21:44.522 rmmod nvme_keyring 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 
2729387 ']' 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 2729387 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 2729387 ']' 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 2729387 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:44.522 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2729387 00:21:44.782 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:44.782 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:44.782 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2729387' 00:21:44.782 killing process with pid 2729387 00:21:44.782 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 2729387 00:21:44.782 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 2729387 00:21:44.782 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:44.782 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:44.782 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:44.782 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:44.782 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:21:44.782 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:44.782 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:21:44.782 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:44.782 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:44.782 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.782 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.782 16:46:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:47.328 00:21:47.328 real 0m11.854s 00:21:47.328 user 0m7.968s 00:21:47.328 sys 0m6.046s 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:47.328 ************************************ 00:21:47.328 END TEST nvmf_control_msg_list 00:21:47.328 ************************************ 
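That closes TEST nvmf_control_msg_list. The test created a TCP transport constrained to a single control message (--control-msg-num 1) with a 768-byte in-capsule data size, exported a 32 MiB, 512-byte-block malloc bdev through nqn.2024-07.io.spdk:cnode0, and ran three spdk_nvme_perf initiators in parallel (cores 0x2, 0x4, 0x8) so they contend for that one control message; each still finished its 1-second, queue-depth-1, 4 KiB randread pass at roughly 1700-1750 IOPS with average latency near 570-580 us. On exit the nvmftestfini trap undid the setup: modprobe -r of nvme-tcp/nvme-fabrics/nvme-keyring, killprocess on the target pid, an iptables-save | grep -v SPDK_NVMF | iptables-restore pass keyed on the rule comment, and an address flush on cvl_0_1, all visible in the trace above. Since rpc_cmd is the harness wrapper around scripts/rpc.py, a standalone equivalent of the setup would look roughly like the sketch below (paths relative to an SPDK checkout; the transport options are reproduced verbatim from the trace):

    # target-side configuration, as plain rpc.py calls against /var/tmp/spdk.sock
    ./scripts/rpc.py nvmf_create_transport -t tcp -o \
        --in-capsule-data-size 768 --control-msg-num 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # three initiators contending for the single control message
    for core in 0x2 0x4 0x8; do
        ./build/bin/spdk_nvme_perf -c "$core" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    done
    wait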
00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:47.328 ************************************ 00:21:47.328 START TEST nvmf_wait_for_buf 00:21:47.328 ************************************ 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:47.328 * Looking for test storage... 00:21:47.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:47.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.328 --rc genhtml_branch_coverage=1 00:21:47.328 --rc genhtml_function_coverage=1 00:21:47.328 --rc genhtml_legend=1 00:21:47.328 --rc geninfo_all_blocks=1 00:21:47.328 --rc geninfo_unexecuted_blocks=1 00:21:47.328 00:21:47.328 ' 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:47.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.328 --rc genhtml_branch_coverage=1 00:21:47.328 --rc genhtml_function_coverage=1 00:21:47.328 --rc genhtml_legend=1 00:21:47.328 --rc geninfo_all_blocks=1 00:21:47.328 --rc geninfo_unexecuted_blocks=1 00:21:47.328 00:21:47.328 ' 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:47.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.328 --rc genhtml_branch_coverage=1 00:21:47.328 --rc genhtml_function_coverage=1 00:21:47.328 --rc genhtml_legend=1 00:21:47.328 --rc geninfo_all_blocks=1 00:21:47.328 --rc geninfo_unexecuted_blocks=1 00:21:47.328 00:21:47.328 ' 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:47.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.328 --rc genhtml_branch_coverage=1 00:21:47.328 --rc genhtml_function_coverage=1 00:21:47.328 --rc genhtml_legend=1 00:21:47.328 --rc geninfo_all_blocks=1 00:21:47.328 --rc geninfo_unexecuted_blocks=1 00:21:47.328 00:21:47.328 ' 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.328 16:46:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.328 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:47.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:47.329 16:46:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.908 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.909 
16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:53.909 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:53.909 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:53.909 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:53.909 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.909 16:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:53.909 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:54.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:54.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms
00:21:54.170
00:21:54.170 --- 10.0.0.2 ping statistics ---
00:21:54.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:54.170 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:54.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:54.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms
00:21:54.170
00:21:54.170 --- 10.0.0.1 ping statistics ---
00:21:54.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:54.170 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=2733840
00:21:54.170 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 2733840
00:21:54.171 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 2733840 ']'
00:21:54.171 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:54.171 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:54.171 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:54.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:54.171 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:54.171 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:54.171 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:21:54.171 [2024-10-01 16:46:45.740214] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
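Condensed for reference, the nvmf_tcp_init plumbing traced above amounts to roughly the following standalone sketch. It assumes, as on this rig, two back-to-back E810 ports whose net devices are cvl_0_0 (target side) and cvl_0_1 (initiator side); the interface names, addresses, and the relative nvmf_tgt path are taken from this trace and would differ on other setups.

    ip netns add cvl_0_0_ns_spdk                         # private namespace for the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                   # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator reachability
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &   # target app inside the namespace

Isolating the target port in its own namespace forces the two ports to talk over the wire rather than through the local stack, so a single host behaves like a real initiator/target pair.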
00:21:54.171 [2024-10-01 16:46:45.740265] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:54.171 [2024-10-01 16:46:45.823321] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:54.431 [2024-10-01 16:46:45.914155] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:54.431 [2024-10-01 16:46:45.914218] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:54.431 [2024-10-01 16:46:45.914226] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:54.431 [2024-10-01 16:46:45.914233] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:54.431 [2024-10-01 16:46:45.914239] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:54.432 [2024-10-01 16:46:45.914263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:21:55.002 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:55.002 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0
00:21:55.002 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:21:55.002 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable
00:21:55.002 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:55.002 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:55.002 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0
00:21:55.002 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:21:55.002 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
00:21:55.002 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:55.002 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:55.263 Malloc0
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:55.263 [2024-10-01 16:46:46.776460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:55.263 [2024-10-01 16:46:46.800687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:55.263 16:46:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
[2024-10-01 16:46:46.879072] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the
discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:21:56.645 Initializing NVMe Controllers
00:21:56.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:56.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:21:56.646 Initialization complete. Launching workers.
00:21:56.646 ========================================================
00:21:56.646                                                                             Latency(us)
00:21:56.646 Device Information                                                       : IOPS       MiB/s    Average        min        max
00:21:56.646 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0 : 129.00     16.12    32294.74    8011.27   63858.09
00:21:56.646 ========================================================
00:21:56.646 Total                                                                    : 129.00     16.12    32294.74    8011.27   63858.09
00:21:56.646
00:21:56.646 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:21:56.646 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:21:56.646 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:56.646 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:56.646 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]]
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:56.906 rmmod nvme_tcp
00:21:56.906 rmmod nvme_fabrics
00:21:56.906 rmmod nvme_keyring
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 2733840 ']'
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 2733840
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 2733840 ']'
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 2733840
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2733840
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2733840'
00:21:56.906 killing process with pid 2733840
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 2733840
00:21:56.906 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 2733840
00:21:57.167 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:21:57.167 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:21:57.167 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:21:57.167 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr
00:21:57.167 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save
00:21:57.167 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:21:57.167 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore
00:21:57.167 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:57.167 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:57.167 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:57.167 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:57.167 16:46:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:59.077 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:59.077
00:21:59.077 real	0m12.216s
00:21:59.077 user	0m5.030s
00:21:59.077 sys	0m5.744s
00:21:59.077 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:59.077 16:46:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:59.077 ************************************
00:21:59.077 END TEST nvmf_wait_for_buf
00:21:59.077 ************************************
00:21:59.337 16:46:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']'
00:21:59.337 16:46:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]]
00:21:59.337 16:46:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']'
00:21:59.337 16:46:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs
00:21:59.337 16:46:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable
00:21:59.337 16:46:50
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:05.918 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:05.918 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:05.918 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:05.918 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.918 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:05.919 16:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:05.919 16:46:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:05.919 16:46:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:05.919 16:46:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:05.919 ************************************ 00:22:05.919 START TEST nvmf_perf_adq 00:22:05.919 ************************************ 00:22:05.919 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:06.186 * Looking for test storage... 00:22:06.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:06.186 16:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:06.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.186 --rc genhtml_branch_coverage=1 00:22:06.186 --rc genhtml_function_coverage=1 00:22:06.186 --rc genhtml_legend=1 00:22:06.186 --rc geninfo_all_blocks=1 00:22:06.186 --rc geninfo_unexecuted_blocks=1 00:22:06.186 00:22:06.186 ' 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:06.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.186 --rc genhtml_branch_coverage=1 00:22:06.186 --rc genhtml_function_coverage=1 00:22:06.186 --rc genhtml_legend=1 00:22:06.186 --rc geninfo_all_blocks=1 00:22:06.186 --rc geninfo_unexecuted_blocks=1 00:22:06.186 00:22:06.186 ' 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:06.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.186 --rc genhtml_branch_coverage=1 00:22:06.186 --rc genhtml_function_coverage=1 00:22:06.186 --rc genhtml_legend=1 00:22:06.186 --rc geninfo_all_blocks=1 00:22:06.186 --rc geninfo_unexecuted_blocks=1 00:22:06.186 00:22:06.186 ' 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:06.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.186 --rc genhtml_branch_coverage=1 00:22:06.186 --rc genhtml_function_coverage=1 00:22:06.186 --rc genhtml_legend=1 00:22:06.186 --rc geninfo_all_blocks=1 00:22:06.186 --rc geninfo_unexecuted_blocks=1 00:22:06.186 00:22:06.186 ' 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
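The lcov gate traced just above reduces to a component-wise version compare. The following is a condensed sketch of the lt/cmp_versions helpers from scripts/common.sh, not the full implementation: the real helper validates each component and handles more operators and separators than shown here.

    lt() { cmp_versions "$1" "<" "$2"; }
    cmp_versions() {
        local ver1 ver2 op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"      # split "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"
        # Walk the longer of the two component lists, treating missing parts as 0.
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == ">" ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == "<" ]]; return; }
        done
        [[ $op == "==" || $op == "<=" || $op == ">=" ]]   # all components equal
    }
    lt 1.15 2 && echo "lcov is pre-2.x"     # true in this run: 1 < 2 on the first component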
00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.186 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.187 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.187 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:06.187 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.187 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:06.187 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:06.187 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:06.187 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.187 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.187 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.187 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:06.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:06.187 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:06.187 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:06.187 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:06.187 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:06.187 16:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:06.187 16:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:14.398 16:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:14.398 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.398 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:14.399 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:14.399 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:14.399 16:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:14.399 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:14.399 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:14.967 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:19.165 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:24.448 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:24.449 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:24.449 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:24.449 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:24.449 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:24.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:24.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms
00:22:24.449
00:22:24.449 --- 10.0.0.2 ping statistics ---
00:22:24.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:24.449 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:24.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
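The trace above is the harness's standard NVMe/TCP physical-NIC topology: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so the traffic really crosses the wire between the two ports. A minimal standalone sketch of the same setup, assuming root and this run's interface names (ipts in the trace is just an iptables wrapper that tags rules for later cleanup):

    # Split two ports of one NIC into target (namespaced) and initiator (root ns).
    NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # Open the NVMe/TCP port; the comment lets cleanup strip only SPDK's rules.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: allow nvmf tcp'
    ping -c 1 10.0.0.2                      # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator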
00:22:24.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms
00:22:24.449
00:22:24.449 --- 10.0.0.1 ping statistics ---
00:22:24.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:24.449 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=2743682
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 2743682
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2743682 ']'
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:24.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable
00:22:24.449 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:24.449 [2024-10-01 16:47:15.682622] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
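The target is launched inside the namespace with --wait-for-rpc, which holds back framework initialization until the test has tuned the socket layer over the RPC socket (/var/tmp/spdk.sock). The rpc_cmd calls that follow in this trace amount to the sequence below; a hedged sketch assuming an SPDK checkout, with scripts/rpc.py standing in for the rpc_cmd wrapper (run 1 of the suite deliberately uses --enable-placement-id 0 and --sock-priority 0, i.e. ADQ placement off, as the baseline):

    RPC="./scripts/rpc.py"   # rpc_cmd in the trace is a thin wrapper around this
    impl=$("$RPC" sock_get_default_impl | jq -r .impl_name)   # posix on this run
    "$RPC" sock_impl_set_options -i "$impl" --enable-placement-id 0 --enable-zerocopy-send-server
    "$RPC" framework_start_init    # only now does the app leave the --wait-for-rpc state
    "$RPC" nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    "$RPC" bdev_malloc_create 64 512 -b Malloc1
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420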
00:22:24.449 [2024-10-01 16:47:15.682683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.449 [2024-10-01 16:47:15.770087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:24.449 [2024-10-01 16:47:15.864447] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.449 [2024-10-01 16:47:15.864504] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.449 [2024-10-01 16:47:15.864512] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.449 [2024-10-01 16:47:15.864519] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.449 [2024-10-01 16:47:15.864525] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:24.450 [2024-10-01 16:47:15.864651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.450 [2024-10-01 16:47:15.864793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.450 [2024-10-01 16:47:15.864944] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:24.450 [2024-10-01 16:47:15.864946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.018 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:25.018 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:25.018 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:25.018 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:25.018 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.018 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.018 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:25.019 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:25.019 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:25.019 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.019 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.019 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.019 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:25.019 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:25.019 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.019 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.019 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.019 
16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:25.019 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.019 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.278 [2024-10-01 16:47:16.770857] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.278 Malloc1 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.278 [2024-10-01 16:47:16.826683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2743867 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:25.278 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:27.188 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:27.188 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.188 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.188 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.188 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:27.188 "tick_rate": 2600000000, 00:22:27.188 "poll_groups": [ 00:22:27.188 { 00:22:27.188 "name": "nvmf_tgt_poll_group_000", 00:22:27.188 "admin_qpairs": 1, 00:22:27.188 "io_qpairs": 1, 00:22:27.188 "current_admin_qpairs": 1, 00:22:27.188 "current_io_qpairs": 1, 00:22:27.188 "pending_bdev_io": 0, 00:22:27.188 "completed_nvme_io": 26159, 00:22:27.188 "transports": [ 00:22:27.188 { 00:22:27.188 "trtype": "TCP" 00:22:27.188 } 00:22:27.188 ] 00:22:27.188 }, 00:22:27.188 { 00:22:27.188 "name": "nvmf_tgt_poll_group_001", 00:22:27.188 "admin_qpairs": 0, 00:22:27.188 "io_qpairs": 1, 00:22:27.188 "current_admin_qpairs": 0, 00:22:27.188 "current_io_qpairs": 1, 00:22:27.188 "pending_bdev_io": 0, 00:22:27.188 "completed_nvme_io": 26804, 00:22:27.188 "transports": [ 00:22:27.188 { 00:22:27.188 "trtype": "TCP" 00:22:27.188 } 00:22:27.188 ] 00:22:27.188 }, 00:22:27.188 { 00:22:27.188 "name": "nvmf_tgt_poll_group_002", 00:22:27.188 "admin_qpairs": 0, 00:22:27.188 "io_qpairs": 1, 00:22:27.188 "current_admin_qpairs": 0, 00:22:27.188 "current_io_qpairs": 1, 00:22:27.188 "pending_bdev_io": 0, 00:22:27.188 "completed_nvme_io": 27268, 00:22:27.188 "transports": [ 00:22:27.188 { 00:22:27.188 "trtype": "TCP" 00:22:27.188 } 00:22:27.188 ] 00:22:27.188 }, 00:22:27.188 { 00:22:27.188 "name": "nvmf_tgt_poll_group_003", 00:22:27.188 "admin_qpairs": 0, 00:22:27.188 "io_qpairs": 1, 00:22:27.188 "current_admin_qpairs": 0, 00:22:27.188 "current_io_qpairs": 1, 00:22:27.188 "pending_bdev_io": 0, 00:22:27.188 "completed_nvme_io": 21973, 00:22:27.188 "transports": [ 00:22:27.188 { 00:22:27.188 "trtype": "TCP" 00:22:27.188 } 00:22:27.188 ] 00:22:27.188 } 00:22:27.188 ] 00:22:27.188 }' 00:22:27.188 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:27.188 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:27.448 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:27.448 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:27.448 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2743867 00:22:35.577 Initializing NVMe Controllers 00:22:35.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:35.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:35.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:35.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:35.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:22:35.578 Initialization complete. Launching workers. 00:22:35.578 ======================================================== 00:22:35.578 Latency(us) 00:22:35.578 Device Information : IOPS MiB/s Average min max 00:22:35.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 14376.89 56.16 4452.72 1192.50 7187.53 00:22:35.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14063.00 54.93 4560.83 1256.09 45258.30 00:22:35.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11655.32 45.53 5491.43 1301.65 10176.82 00:22:35.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13889.81 54.26 4608.54 1249.12 9818.12 00:22:35.578 ======================================================== 00:22:35.578 Total : 53985.02 210.88 4745.23 1192.50 45258.30 00:22:35.578 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.578 rmmod nvme_tcp 00:22:35.578 rmmod nvme_fabrics 00:22:35.578 rmmod nvme_keyring 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 2743682 ']' 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 2743682 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2743682 ']' 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2743682 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2743682 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2743682' 00:22:35.578 killing process with pid 2743682 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2743682 00:22:35.578 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2743682 00:22:35.838 16:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:35.838 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:35.838 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:35.838 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:35.838 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:22:35.838 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:35.838 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:22:35.838 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.838 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:35.838 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.838 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.838 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.748 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:37.748 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:37.748 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:37.748 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:39.657 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:42.200 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:47.485 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:47.485 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:47.485 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:47.485 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.485 16:47:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:47.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:22:47.485 00:22:47.485 --- 10.0.0.2 ping statistics --- 00:22:47.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.485 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:47.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:22:47.485 00:22:47.485 --- 10.0.0.1 ping statistics --- 00:22:47.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.485 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:47.485 net.core.busy_poll = 1 00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:22:47.485 net.core.busy_read = 1
00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc
00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=2747804
00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 2747804
00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2747804 ']'
00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:47.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable
00:22:47.485 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:47.485 [2024-10-01 16:47:39.050520] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:22:47.485 [2024-10-01 16:47:39.050569] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:47.485 [2024-10-01 16:47:39.134256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:47.485 [2024-10-01 16:47:39.201985] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
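This second pass is the actual ADQ configuration: hardware traffic-class offload is enabled on the namespaced E810 port, busy polling is switched on, and an mqprio qdisc plus a hardware-offloaded flower filter pin all NVMe/TCP traffic to 10.0.0.2:4420 onto a dedicated queue group. Condensed from the ethtool (previous trace block) and tc commands above, run as root inside the target namespace; device name, queue split and address are this run's values:

    DEV=cvl_0_0
    ethtool --offload "$DEV" hw-tc-offload on
    ethtool --set-priv-flags "$DEV" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1 net.core.busy_read=1
    # TC0 = queues 0-1 (default traffic), TC1 = queues 2-3 (ADQ application queues)
    tc qdisc add dev "$DEV" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$DEV" ingress
    # skip_sw: the NIC matches the flow in hardware and steers it to TC1
    tc filter add dev "$DEV" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1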
00:22:47.745 [2024-10-01 16:47:39.202038] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.745 [2024-10-01 16:47:39.202046] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.745 [2024-10-01 16:47:39.202052] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.745 [2024-10-01 16:47:39.202058] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:47.745 [2024-10-01 16:47:39.202193] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.745 [2024-10-01 16:47:39.202326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.745 [2024-10-01 16:47:39.202446] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:47.745 [2024-10-01 16:47:39.202449] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.314 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.574 16:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.574 [2024-10-01 16:47:40.064934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.574 Malloc1 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.574 [2024-10-01 16:47:40.120945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2748108 00:22:48.574 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:48.575 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:50.510 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:50.510 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.510 16:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.510 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.510 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:50.510 "tick_rate": 2600000000, 00:22:50.510 "poll_groups": [ 00:22:50.510 { 00:22:50.510 "name": "nvmf_tgt_poll_group_000", 00:22:50.510 "admin_qpairs": 1, 00:22:50.510 "io_qpairs": 1, 00:22:50.510 "current_admin_qpairs": 1, 00:22:50.510 "current_io_qpairs": 1, 00:22:50.510 "pending_bdev_io": 0, 00:22:50.510 "completed_nvme_io": 38960, 00:22:50.510 "transports": [ 00:22:50.510 { 00:22:50.510 "trtype": "TCP" 00:22:50.510 } 00:22:50.510 ] 00:22:50.510 }, 00:22:50.510 { 00:22:50.510 "name": "nvmf_tgt_poll_group_001", 00:22:50.510 "admin_qpairs": 0, 00:22:50.510 "io_qpairs": 3, 00:22:50.510 "current_admin_qpairs": 0, 00:22:50.510 "current_io_qpairs": 3, 00:22:50.510 "pending_bdev_io": 0, 00:22:50.510 "completed_nvme_io": 39627, 00:22:50.510 "transports": [ 00:22:50.510 { 00:22:50.510 "trtype": "TCP" 00:22:50.510 } 00:22:50.510 ] 00:22:50.510 }, 00:22:50.510 { 00:22:50.510 "name": "nvmf_tgt_poll_group_002", 00:22:50.510 "admin_qpairs": 0, 00:22:50.510 "io_qpairs": 0, 00:22:50.510 "current_admin_qpairs": 0, 00:22:50.510 "current_io_qpairs": 0, 00:22:50.510 "pending_bdev_io": 0, 00:22:50.510 "completed_nvme_io": 0, 00:22:50.510 "transports": [ 00:22:50.510 { 00:22:50.510 "trtype": "TCP" 00:22:50.510 } 00:22:50.510 ] 00:22:50.510 }, 00:22:50.510 { 00:22:50.510 "name": "nvmf_tgt_poll_group_003", 00:22:50.510 "admin_qpairs": 0, 00:22:50.510 "io_qpairs": 0, 00:22:50.510 "current_admin_qpairs": 0, 00:22:50.510 "current_io_qpairs": 0, 00:22:50.510 "pending_bdev_io": 0, 00:22:50.510 "completed_nvme_io": 0, 00:22:50.510 "transports": [ 00:22:50.510 { 00:22:50.510 "trtype": "TCP" 00:22:50.510 } 00:22:50.510 ] 00:22:50.510 } 00:22:50.510 ] 00:22:50.510 }' 00:22:50.510 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:50.510 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:50.769 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:50.769 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:50.769 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2748108 00:22:58.892 Initializing NVMe Controllers 00:22:58.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:58.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:58.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:58.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:58.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:58.892 Initialization complete. Launching workers. 
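Before the results print, the suite has already made its pass/fail decision from the nvmf_get_stats snapshot above: with ADQ steering and --sock-priority 1, the four initiator connections should collapse onto a subset of the target's poll groups (here one on group 000 and three on group 001, with groups 002 and 003 idle) instead of spreading one per group as in the baseline run. The check counts idle poll groups and fails if fewer than two are idle; roughly, and again assuming scripts/rpc.py in place of rpc_cmd:

    # Count poll groups that currently serve no I/O qpairs.
    idle=$(./scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
    # Two busy-polled groups should own all qpairs, leaving the other two idle.
    [[ $idle -lt 2 ]] && echo "ADQ steering ineffective ($idle idle groups)" && exit 1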
00:22:58.892 ========================================================
00:22:58.892 Latency(us)
00:22:58.892 Device Information : IOPS MiB/s Average min max
00:22:58.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6907.60 26.98 9273.83 1359.57 56341.32
00:22:58.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7112.60 27.78 8997.68 1236.50 56318.59
00:22:58.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6216.30 24.28 10318.58 1047.56 56428.23
00:22:58.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 20448.50 79.88 3129.25 959.08 44579.79
00:22:58.892 ========================================================
00:22:58.892 Total : 40685.00 158.93 6296.88 959.08 56428.23
00:22:58.892
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:58.892 rmmod nvme_tcp
00:22:58.892 rmmod nvme_fabrics
00:22:58.892 rmmod nvme_keyring
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 2747804 ']'
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 2747804
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2747804 ']'
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2747804
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2747804
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2747804'
00:22:58.892 killing process with pid 2747804
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2747804
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2747804
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:58.892 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:02.184 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:02.184 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT
00:23:02.184
00:23:02.184 real 0m56.054s
00:23:02.184 user 2m49.875s
00:23:02.184 sys 0m12.760s
00:23:02.184 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:02.184 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:02.184 ************************************
00:23:02.184 END TEST nvmf_perf_adq
00:23:02.184 ************************************
00:23:02.184 16:47:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:23:02.184 16:47:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:23:02.184 16:47:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:02.184 16:47:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:23:02.184 ************************************
00:23:02.184 START TEST nvmf_shutdown
00:23:02.184 ************************************
00:23:02.184 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:23:02.184 * Looking for test storage...
00:23:02.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:02.184 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:02.184 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:23:02.184 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:02.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.445 --rc genhtml_branch_coverage=1 00:23:02.445 --rc genhtml_function_coverage=1 00:23:02.445 --rc genhtml_legend=1 00:23:02.445 --rc geninfo_all_blocks=1 00:23:02.445 --rc geninfo_unexecuted_blocks=1 00:23:02.445 00:23:02.445 ' 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:02.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.445 --rc genhtml_branch_coverage=1 00:23:02.445 --rc genhtml_function_coverage=1 00:23:02.445 --rc genhtml_legend=1 00:23:02.445 --rc geninfo_all_blocks=1 00:23:02.445 --rc geninfo_unexecuted_blocks=1 00:23:02.445 00:23:02.445 ' 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:02.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.445 --rc genhtml_branch_coverage=1 00:23:02.445 --rc genhtml_function_coverage=1 00:23:02.445 --rc genhtml_legend=1 00:23:02.445 --rc geninfo_all_blocks=1 00:23:02.445 --rc geninfo_unexecuted_blocks=1 00:23:02.445 00:23:02.445 ' 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:02.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.445 --rc genhtml_branch_coverage=1 00:23:02.445 --rc genhtml_function_coverage=1 00:23:02.445 --rc genhtml_legend=1 00:23:02.445 --rc geninfo_all_blocks=1 00:23:02.445 --rc geninfo_unexecuted_blocks=1 00:23:02.445 00:23:02.445 ' 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
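The trace above walks scripts/common.sh's version comparator: "lt 1.15 2" splits both version strings on ".", "-" and ":" and compares them numerically, field by field, which is how the suite decides that this lcov is a 1.x release and selects the fallback --rc coverage flags exported just afterwards. A condensed sketch of the same logic (the function name is mine, not the script's):

    cmp_versions_lt() {
        # split on the same separators scripts/common.sh uses (. - :)
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i max=$((${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}))
        for ((i = 0; i < max; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # first lower field decides
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    cmp_versions_lt 1.15 2 && echo "old lcov, use --rc fallback options"

Missing fields default to 0, so "1.15" versus "2" compares 1 against 2 and returns true on the first field, exactly as seen in the trace.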
00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:02.445 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:02.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:02.446 16:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:02.446 ************************************ 00:23:02.446 START TEST nvmf_shutdown_tc1 00:23:02.446 ************************************ 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:02.446 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:10.581 16:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:10.581 16:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:10.581 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:10.581 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:10.581 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:10.581 16:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:10.581 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:10.581 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:10.582 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:10.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:23:10.582 00:23:10.582 --- 10.0.0.2 ping statistics --- 00:23:10.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.582 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:10.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:10.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms
00:23:10.582
00:23:10.582 --- 10.0.0.1 ping statistics ---
00:23:10.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:10.582 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=2754052
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 2754052
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2754052 ']'
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
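Everything from nvmf/common.sh@267 through @291 above is the standard phy-mode TCP fixture: one port of the discovered e810 pair moves into a private network namespace to act as the target, the other stays in the root namespace as the initiator, and a ping in each direction proves the 10.0.0.0/24 link. Stripped of the xtrace plumbing, the setup amounts to roughly the following (ipts above is the suite's iptables wrapper; the plain iptables form it expands to is shown here):

    TARGET_NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"             # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    # open the NVMe/TCP listen port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1      # target ns -> root ns

With the fixture in place, NVMF_APP is prefixed with "ip netns exec cvl_0_0_ns_spdk" so the target process itself runs inside the namespace, which is why nvmf_tgt is launched through the netns wrapper in the lines that follow.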
00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:10.582 [2024-10-01 16:48:01.139736] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:23:10.582 [2024-10-01 16:48:01.139796] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.582 [2024-10-01 16:48:01.202700] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:10.582 [2024-10-01 16:48:01.270635] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.582 [2024-10-01 16:48:01.270673] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.582 [2024-10-01 16:48:01.270679] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.582 [2024-10-01 16:48:01.270688] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.582 [2024-10-01 16:48:01.270692] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.582 [2024-10-01 16:48:01.270800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.582 [2024-10-01 16:48:01.270937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:10.582 [2024-10-01 16:48:01.271072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:23:10.582 [2024-10-01 16:48:01.271257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.582 [2024-10-01 16:48:01.422637] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:10.582 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:10.583 16:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.583 Malloc1 00:23:10.583 [2024-10-01 16:48:01.525482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.583 Malloc2 00:23:10.583 Malloc3 00:23:10.583 Malloc4 00:23:10.583 Malloc5 00:23:10.583 Malloc6 00:23:10.583 Malloc7 00:23:10.583 Malloc8 00:23:10.583 Malloc9 00:23:10.583 Malloc10 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2754424 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2754424 /var/tmp/bdevperf.sock 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2754424 ']' 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
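The long config+=(...) heredoc trace that follows is gen_nvmf_target_json at work: for each subsystem number it emits one bdev_nvme_attach_controller stanza, joins the stanzas with IFS=',', and pipes the result through jq to produce the --json config fed to bdev_svc (and later bdevperf). A condensed sketch of the pattern, with the bdev-subsystem wrapper abbreviated (the real helper in nvmf/common.sh carries extra options and reads transport/address from the environment):

    gen_target_json_sketch() {
        # one attach-controller stanza per subsystem number (default: 1)
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
EOF
            )")
        done
        # join the stanzas with commas and wrap them for a --json consumer
        local IFS=,
        jq . <<< "{ \"subsystems\": [ { \"subsystem\": \"bdev\", \"config\": [ ${config[*]} ] } ] }"
    }

    gen_target_json_sketch 1 2 3 > /tmp/bdevperf.json   # stanzas for cnode1..cnode3

Generating the config this way lets one bdevperf process attach to all ten subsystems at once, which is what makes the subsequent kill -9 a meaningful shutdown stress rather than a single-connection teardown.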
00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:10.583 { 00:23:10.583 "params": { 00:23:10.583 "name": "Nvme$subsystem", 00:23:10.583 "trtype": "$TEST_TRANSPORT", 00:23:10.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.583 "adrfam": "ipv4", 00:23:10.583 "trsvcid": "$NVMF_PORT", 00:23:10.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.583 "hdgst": ${hdgst:-false}, 00:23:10.583 "ddgst": ${ddgst:-false} 00:23:10.583 }, 00:23:10.583 "method": "bdev_nvme_attach_controller" 00:23:10.583 } 00:23:10.583 EOF 00:23:10.583 )") 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:10.583 { 00:23:10.583 "params": { 00:23:10.583 "name": "Nvme$subsystem", 00:23:10.583 "trtype": "$TEST_TRANSPORT", 00:23:10.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.583 "adrfam": "ipv4", 00:23:10.583 "trsvcid": "$NVMF_PORT", 00:23:10.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.583 "hdgst": ${hdgst:-false}, 00:23:10.583 "ddgst": ${ddgst:-false} 00:23:10.583 }, 00:23:10.583 "method": "bdev_nvme_attach_controller" 00:23:10.583 } 00:23:10.583 EOF 00:23:10.583 )") 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:10.583 { 00:23:10.583 "params": { 00:23:10.583 "name": "Nvme$subsystem", 00:23:10.583 "trtype": "$TEST_TRANSPORT", 00:23:10.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.583 "adrfam": "ipv4", 00:23:10.583 "trsvcid": "$NVMF_PORT", 00:23:10.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.583 "hdgst": ${hdgst:-false}, 00:23:10.583 "ddgst": ${ddgst:-false} 00:23:10.583 }, 00:23:10.583 "method": "bdev_nvme_attach_controller" 
00:23:10.583 } 00:23:10.583 EOF 00:23:10.583 )") 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:10.583 { 00:23:10.583 "params": { 00:23:10.583 "name": "Nvme$subsystem", 00:23:10.583 "trtype": "$TEST_TRANSPORT", 00:23:10.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.583 "adrfam": "ipv4", 00:23:10.583 "trsvcid": "$NVMF_PORT", 00:23:10.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.583 "hdgst": ${hdgst:-false}, 00:23:10.583 "ddgst": ${ddgst:-false} 00:23:10.583 }, 00:23:10.583 "method": "bdev_nvme_attach_controller" 00:23:10.583 } 00:23:10.583 EOF 00:23:10.583 )") 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:10.583 { 00:23:10.583 "params": { 00:23:10.583 "name": "Nvme$subsystem", 00:23:10.583 "trtype": "$TEST_TRANSPORT", 00:23:10.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.583 "adrfam": "ipv4", 00:23:10.583 "trsvcid": "$NVMF_PORT", 00:23:10.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.583 "hdgst": ${hdgst:-false}, 00:23:10.583 "ddgst": ${ddgst:-false} 00:23:10.583 }, 00:23:10.583 "method": "bdev_nvme_attach_controller" 00:23:10.583 } 00:23:10.583 EOF 00:23:10.583 )") 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:10.583 { 00:23:10.583 "params": { 00:23:10.583 "name": "Nvme$subsystem", 00:23:10.583 "trtype": "$TEST_TRANSPORT", 00:23:10.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.583 "adrfam": "ipv4", 00:23:10.583 "trsvcid": "$NVMF_PORT", 00:23:10.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.583 "hdgst": ${hdgst:-false}, 00:23:10.583 "ddgst": ${ddgst:-false} 00:23:10.583 }, 00:23:10.583 "method": "bdev_nvme_attach_controller" 00:23:10.583 } 00:23:10.583 EOF 00:23:10.583 )") 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:10.583 [2024-10-01 16:48:01.969755] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:23:10.583 [2024-10-01 16:48:01.969807] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:10.583 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:10.583 { 00:23:10.583 "params": { 00:23:10.583 "name": "Nvme$subsystem", 00:23:10.583 "trtype": "$TEST_TRANSPORT", 00:23:10.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.583 "adrfam": "ipv4", 00:23:10.583 "trsvcid": "$NVMF_PORT", 00:23:10.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.583 "hdgst": ${hdgst:-false}, 00:23:10.583 "ddgst": ${ddgst:-false} 00:23:10.583 }, 00:23:10.584 "method": "bdev_nvme_attach_controller" 00:23:10.584 } 00:23:10.584 EOF 00:23:10.584 )") 00:23:10.584 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:10.584 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:10.584 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:10.584 { 00:23:10.584 "params": { 00:23:10.584 "name": "Nvme$subsystem", 00:23:10.584 "trtype": "$TEST_TRANSPORT", 00:23:10.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.584 "adrfam": "ipv4", 00:23:10.584 "trsvcid": "$NVMF_PORT", 00:23:10.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.584 "hdgst": ${hdgst:-false}, 00:23:10.584 "ddgst": ${ddgst:-false} 00:23:10.584 }, 00:23:10.584 "method": "bdev_nvme_attach_controller" 00:23:10.584 } 00:23:10.584 EOF 00:23:10.584 )") 00:23:10.584 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:10.584 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:10.584 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:10.584 { 00:23:10.584 "params": { 00:23:10.584 "name": "Nvme$subsystem", 00:23:10.584 "trtype": "$TEST_TRANSPORT", 00:23:10.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.584 "adrfam": "ipv4", 00:23:10.584 "trsvcid": "$NVMF_PORT", 00:23:10.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.584 "hdgst": ${hdgst:-false}, 00:23:10.584 "ddgst": ${ddgst:-false} 00:23:10.584 }, 00:23:10.584 "method": "bdev_nvme_attach_controller" 00:23:10.584 } 00:23:10.584 EOF 00:23:10.584 )") 00:23:10.584 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:10.584 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:10.584 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:10.584 { 00:23:10.584 "params": { 00:23:10.584 "name": "Nvme$subsystem", 00:23:10.584 "trtype": "$TEST_TRANSPORT", 00:23:10.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.584 "adrfam": "ipv4", 
00:23:10.584 "trsvcid": "$NVMF_PORT", 00:23:10.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.584 "hdgst": ${hdgst:-false}, 00:23:10.584 "ddgst": ${ddgst:-false} 00:23:10.584 }, 00:23:10.584 "method": "bdev_nvme_attach_controller" 00:23:10.584 } 00:23:10.584 EOF 00:23:10.584 )") 00:23:10.584 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:10.584 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:23:10.584 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:23:10.584 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:23:10.584 "params": { 00:23:10.584 "name": "Nvme1", 00:23:10.584 "trtype": "tcp", 00:23:10.584 "traddr": "10.0.0.2", 00:23:10.584 "adrfam": "ipv4", 00:23:10.584 "trsvcid": "4420", 00:23:10.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.584 "hdgst": false, 00:23:10.584 "ddgst": false 00:23:10.584 }, 00:23:10.584 "method": "bdev_nvme_attach_controller" 00:23:10.584 },{ 00:23:10.584 "params": { 00:23:10.584 "name": "Nvme2", 00:23:10.584 "trtype": "tcp", 00:23:10.584 "traddr": "10.0.0.2", 00:23:10.584 "adrfam": "ipv4", 00:23:10.584 "trsvcid": "4420", 00:23:10.584 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:10.584 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:10.584 "hdgst": false, 00:23:10.584 "ddgst": false 00:23:10.584 }, 00:23:10.584 "method": "bdev_nvme_attach_controller" 00:23:10.584 },{ 00:23:10.584 "params": { 00:23:10.584 "name": "Nvme3", 00:23:10.584 "trtype": "tcp", 00:23:10.584 "traddr": "10.0.0.2", 00:23:10.584 "adrfam": "ipv4", 00:23:10.584 "trsvcid": "4420", 00:23:10.584 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:10.584 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:10.584 "hdgst": false, 00:23:10.584 "ddgst": false 00:23:10.584 }, 00:23:10.584 "method": "bdev_nvme_attach_controller" 00:23:10.584 },{ 00:23:10.584 "params": { 00:23:10.584 "name": "Nvme4", 00:23:10.584 "trtype": "tcp", 00:23:10.584 "traddr": "10.0.0.2", 00:23:10.584 "adrfam": "ipv4", 00:23:10.584 "trsvcid": "4420", 00:23:10.584 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:10.584 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:10.584 "hdgst": false, 00:23:10.584 "ddgst": false 00:23:10.584 }, 00:23:10.584 "method": "bdev_nvme_attach_controller" 00:23:10.584 },{ 00:23:10.584 "params": { 00:23:10.584 "name": "Nvme5", 00:23:10.584 "trtype": "tcp", 00:23:10.584 "traddr": "10.0.0.2", 00:23:10.584 "adrfam": "ipv4", 00:23:10.584 "trsvcid": "4420", 00:23:10.584 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:10.584 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:10.584 "hdgst": false, 00:23:10.584 "ddgst": false 00:23:10.584 }, 00:23:10.584 "method": "bdev_nvme_attach_controller" 00:23:10.584 },{ 00:23:10.584 "params": { 00:23:10.584 "name": "Nvme6", 00:23:10.584 "trtype": "tcp", 00:23:10.584 "traddr": "10.0.0.2", 00:23:10.584 "adrfam": "ipv4", 00:23:10.584 "trsvcid": "4420", 00:23:10.584 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:10.584 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:10.584 "hdgst": false, 00:23:10.584 "ddgst": false 00:23:10.584 }, 00:23:10.584 "method": "bdev_nvme_attach_controller" 00:23:10.584 },{ 00:23:10.584 "params": { 00:23:10.584 "name": "Nvme7", 00:23:10.584 "trtype": "tcp", 00:23:10.584 "traddr": "10.0.0.2", 00:23:10.584 
"adrfam": "ipv4", 00:23:10.584 "trsvcid": "4420", 00:23:10.584 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:10.584 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:10.584 "hdgst": false, 00:23:10.584 "ddgst": false 00:23:10.584 }, 00:23:10.584 "method": "bdev_nvme_attach_controller" 00:23:10.584 },{ 00:23:10.584 "params": { 00:23:10.584 "name": "Nvme8", 00:23:10.584 "trtype": "tcp", 00:23:10.584 "traddr": "10.0.0.2", 00:23:10.584 "adrfam": "ipv4", 00:23:10.584 "trsvcid": "4420", 00:23:10.584 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:10.584 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:10.584 "hdgst": false, 00:23:10.584 "ddgst": false 00:23:10.584 }, 00:23:10.584 "method": "bdev_nvme_attach_controller" 00:23:10.584 },{ 00:23:10.584 "params": { 00:23:10.584 "name": "Nvme9", 00:23:10.584 "trtype": "tcp", 00:23:10.584 "traddr": "10.0.0.2", 00:23:10.584 "adrfam": "ipv4", 00:23:10.584 "trsvcid": "4420", 00:23:10.584 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:10.584 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:10.584 "hdgst": false, 00:23:10.584 "ddgst": false 00:23:10.584 }, 00:23:10.584 "method": "bdev_nvme_attach_controller" 00:23:10.584 },{ 00:23:10.584 "params": { 00:23:10.584 "name": "Nvme10", 00:23:10.584 "trtype": "tcp", 00:23:10.584 "traddr": "10.0.0.2", 00:23:10.584 "adrfam": "ipv4", 00:23:10.584 "trsvcid": "4420", 00:23:10.584 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:10.584 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:10.584 "hdgst": false, 00:23:10.584 "ddgst": false 00:23:10.584 }, 00:23:10.584 "method": "bdev_nvme_attach_controller" 00:23:10.584 }' 00:23:10.584 [2024-10-01 16:48:02.048289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.584 [2024-10-01 16:48:02.110790] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.965 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:11.965 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:11.965 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:11.965 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.965 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:11.965 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.965 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2754424 00:23:11.965 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:11.965 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:12.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2754424 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2754052 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:12.904 { 00:23:12.904 "params": { 00:23:12.904 "name": "Nvme$subsystem", 00:23:12.904 "trtype": "$TEST_TRANSPORT", 00:23:12.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.904 "adrfam": "ipv4", 00:23:12.904 "trsvcid": "$NVMF_PORT", 00:23:12.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.904 "hdgst": ${hdgst:-false}, 00:23:12.904 "ddgst": ${ddgst:-false} 00:23:12.904 }, 00:23:12.904 "method": "bdev_nvme_attach_controller" 00:23:12.904 } 00:23:12.904 EOF 00:23:12.904 )") 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:12.904 { 00:23:12.904 "params": { 00:23:12.904 "name": "Nvme$subsystem", 00:23:12.904 "trtype": "$TEST_TRANSPORT", 00:23:12.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.904 "adrfam": "ipv4", 00:23:12.904 "trsvcid": "$NVMF_PORT", 00:23:12.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.904 "hdgst": ${hdgst:-false}, 00:23:12.904 "ddgst": ${ddgst:-false} 00:23:12.904 }, 00:23:12.904 "method": "bdev_nvme_attach_controller" 00:23:12.904 } 00:23:12.904 EOF 00:23:12.904 )") 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:12.904 { 00:23:12.904 "params": { 00:23:12.904 "name": "Nvme$subsystem", 00:23:12.904 "trtype": "$TEST_TRANSPORT", 00:23:12.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.904 "adrfam": "ipv4", 00:23:12.904 "trsvcid": "$NVMF_PORT", 00:23:12.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.904 "hdgst": ${hdgst:-false}, 00:23:12.904 "ddgst": ${ddgst:-false} 00:23:12.904 }, 00:23:12.904 "method": "bdev_nvme_attach_controller" 00:23:12.904 } 00:23:12.904 EOF 00:23:12.904 )") 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:12.904 { 00:23:12.904 "params": { 00:23:12.904 "name": "Nvme$subsystem", 00:23:12.904 "trtype": "$TEST_TRANSPORT", 00:23:12.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.904 "adrfam": "ipv4", 00:23:12.904 "trsvcid": "$NVMF_PORT", 00:23:12.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.904 "hdgst": ${hdgst:-false}, 00:23:12.904 "ddgst": ${ddgst:-false} 00:23:12.904 }, 00:23:12.904 "method": "bdev_nvme_attach_controller" 00:23:12.904 } 00:23:12.904 EOF 00:23:12.904 )") 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:12.904 { 00:23:12.904 "params": { 00:23:12.904 "name": "Nvme$subsystem", 00:23:12.904 "trtype": "$TEST_TRANSPORT", 00:23:12.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.904 "adrfam": "ipv4", 00:23:12.904 "trsvcid": "$NVMF_PORT", 00:23:12.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.904 "hdgst": ${hdgst:-false}, 00:23:12.904 "ddgst": ${ddgst:-false} 00:23:12.904 }, 00:23:12.904 "method": "bdev_nvme_attach_controller" 00:23:12.904 } 00:23:12.904 EOF 00:23:12.904 )") 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:12.904 { 00:23:12.904 "params": { 00:23:12.904 "name": "Nvme$subsystem", 00:23:12.904 "trtype": "$TEST_TRANSPORT", 00:23:12.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.904 "adrfam": "ipv4", 00:23:12.904 "trsvcid": "$NVMF_PORT", 00:23:12.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.904 "hdgst": ${hdgst:-false}, 00:23:12.904 "ddgst": ${ddgst:-false} 00:23:12.904 }, 00:23:12.904 "method": "bdev_nvme_attach_controller" 00:23:12.904 } 00:23:12.904 EOF 00:23:12.904 )") 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:12.904 [2024-10-01 16:48:04.403391] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
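Editor's note: the bdevperf command relaunched above receives its configuration as --json /dev/fd/62 because shutdown.sh (line 74, quoted in the Killed message) passes a process substitution, <(gen_nvmf_target_json ...). The generated JSON never touches disk; bash hands the reader a /dev/fd path. A minimal sketch of that plumbing, with gen_config as a hypothetical stand-in for the real generator:

# Sketch only: gen_config is a placeholder, not SPDK's gen_nvmf_target_json.
# <(...) runs the generator in a subshell and expands to a /dev/fd/N path,
# which is exactly why "--json /dev/fd/62" appears in the trace above.
gen_config() { printf '{ "subsystems": [] }\n'; }
./build/examples/bdevperf --json <(gen_config) -q 64 -o 65536 -w verify -t 1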
00:23:12.904 [2024-10-01 16:48:04.403443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2754778 ] 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:12.904 { 00:23:12.904 "params": { 00:23:12.904 "name": "Nvme$subsystem", 00:23:12.904 "trtype": "$TEST_TRANSPORT", 00:23:12.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.904 "adrfam": "ipv4", 00:23:12.904 "trsvcid": "$NVMF_PORT", 00:23:12.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.904 "hdgst": ${hdgst:-false}, 00:23:12.904 "ddgst": ${ddgst:-false} 00:23:12.904 }, 00:23:12.904 "method": "bdev_nvme_attach_controller" 00:23:12.904 } 00:23:12.904 EOF 00:23:12.904 )") 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:12.904 { 00:23:12.904 "params": { 00:23:12.904 "name": "Nvme$subsystem", 00:23:12.904 "trtype": "$TEST_TRANSPORT", 00:23:12.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.904 "adrfam": "ipv4", 00:23:12.904 "trsvcid": "$NVMF_PORT", 00:23:12.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.904 "hdgst": ${hdgst:-false}, 00:23:12.904 "ddgst": ${ddgst:-false} 00:23:12.904 }, 00:23:12.904 "method": "bdev_nvme_attach_controller" 00:23:12.904 } 00:23:12.904 EOF 00:23:12.904 )") 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:12.904 { 00:23:12.904 "params": { 00:23:12.904 "name": "Nvme$subsystem", 00:23:12.904 "trtype": "$TEST_TRANSPORT", 00:23:12.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.904 "adrfam": "ipv4", 00:23:12.904 "trsvcid": "$NVMF_PORT", 00:23:12.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.904 "hdgst": ${hdgst:-false}, 00:23:12.904 "ddgst": ${ddgst:-false} 00:23:12.904 }, 00:23:12.904 "method": "bdev_nvme_attach_controller" 00:23:12.904 } 00:23:12.904 EOF 00:23:12.904 )") 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:12.904 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:12.904 { 00:23:12.904 "params": { 00:23:12.904 "name": "Nvme$subsystem", 00:23:12.904 "trtype": "$TEST_TRANSPORT", 00:23:12.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.904 
"adrfam": "ipv4", 00:23:12.904 "trsvcid": "$NVMF_PORT", 00:23:12.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.904 "hdgst": ${hdgst:-false}, 00:23:12.904 "ddgst": ${ddgst:-false} 00:23:12.904 }, 00:23:12.904 "method": "bdev_nvme_attach_controller" 00:23:12.904 } 00:23:12.905 EOF 00:23:12.905 )") 00:23:12.905 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:12.905 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:23:12.905 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:23:12.905 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:23:12.905 "params": { 00:23:12.905 "name": "Nvme1", 00:23:12.905 "trtype": "tcp", 00:23:12.905 "traddr": "10.0.0.2", 00:23:12.905 "adrfam": "ipv4", 00:23:12.905 "trsvcid": "4420", 00:23:12.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.905 "hdgst": false, 00:23:12.905 "ddgst": false 00:23:12.905 }, 00:23:12.905 "method": "bdev_nvme_attach_controller" 00:23:12.905 },{ 00:23:12.905 "params": { 00:23:12.905 "name": "Nvme2", 00:23:12.905 "trtype": "tcp", 00:23:12.905 "traddr": "10.0.0.2", 00:23:12.905 "adrfam": "ipv4", 00:23:12.905 "trsvcid": "4420", 00:23:12.905 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:12.905 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:12.905 "hdgst": false, 00:23:12.905 "ddgst": false 00:23:12.905 }, 00:23:12.905 "method": "bdev_nvme_attach_controller" 00:23:12.905 },{ 00:23:12.905 "params": { 00:23:12.905 "name": "Nvme3", 00:23:12.905 "trtype": "tcp", 00:23:12.905 "traddr": "10.0.0.2", 00:23:12.905 "adrfam": "ipv4", 00:23:12.905 "trsvcid": "4420", 00:23:12.905 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:12.905 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:12.905 "hdgst": false, 00:23:12.905 "ddgst": false 00:23:12.905 }, 00:23:12.905 "method": "bdev_nvme_attach_controller" 00:23:12.905 },{ 00:23:12.905 "params": { 00:23:12.905 "name": "Nvme4", 00:23:12.905 "trtype": "tcp", 00:23:12.905 "traddr": "10.0.0.2", 00:23:12.905 "adrfam": "ipv4", 00:23:12.905 "trsvcid": "4420", 00:23:12.905 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:12.905 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:12.905 "hdgst": false, 00:23:12.905 "ddgst": false 00:23:12.905 }, 00:23:12.905 "method": "bdev_nvme_attach_controller" 00:23:12.905 },{ 00:23:12.905 "params": { 00:23:12.905 "name": "Nvme5", 00:23:12.905 "trtype": "tcp", 00:23:12.905 "traddr": "10.0.0.2", 00:23:12.905 "adrfam": "ipv4", 00:23:12.905 "trsvcid": "4420", 00:23:12.905 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:12.905 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:12.905 "hdgst": false, 00:23:12.905 "ddgst": false 00:23:12.905 }, 00:23:12.905 "method": "bdev_nvme_attach_controller" 00:23:12.905 },{ 00:23:12.905 "params": { 00:23:12.905 "name": "Nvme6", 00:23:12.905 "trtype": "tcp", 00:23:12.905 "traddr": "10.0.0.2", 00:23:12.905 "adrfam": "ipv4", 00:23:12.905 "trsvcid": "4420", 00:23:12.905 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:12.905 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:12.905 "hdgst": false, 00:23:12.905 "ddgst": false 00:23:12.905 }, 00:23:12.905 "method": "bdev_nvme_attach_controller" 00:23:12.905 },{ 00:23:12.905 "params": { 00:23:12.905 "name": "Nvme7", 00:23:12.905 "trtype": "tcp", 00:23:12.905 "traddr": "10.0.0.2", 
00:23:12.905 "adrfam": "ipv4", 00:23:12.905 "trsvcid": "4420", 00:23:12.905 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:12.905 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:12.905 "hdgst": false, 00:23:12.905 "ddgst": false 00:23:12.905 }, 00:23:12.905 "method": "bdev_nvme_attach_controller" 00:23:12.905 },{ 00:23:12.905 "params": { 00:23:12.905 "name": "Nvme8", 00:23:12.905 "trtype": "tcp", 00:23:12.905 "traddr": "10.0.0.2", 00:23:12.905 "adrfam": "ipv4", 00:23:12.905 "trsvcid": "4420", 00:23:12.905 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:12.905 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:12.905 "hdgst": false, 00:23:12.905 "ddgst": false 00:23:12.905 }, 00:23:12.905 "method": "bdev_nvme_attach_controller" 00:23:12.905 },{ 00:23:12.905 "params": { 00:23:12.905 "name": "Nvme9", 00:23:12.905 "trtype": "tcp", 00:23:12.905 "traddr": "10.0.0.2", 00:23:12.905 "adrfam": "ipv4", 00:23:12.905 "trsvcid": "4420", 00:23:12.905 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:12.905 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:12.905 "hdgst": false, 00:23:12.905 "ddgst": false 00:23:12.905 }, 00:23:12.905 "method": "bdev_nvme_attach_controller" 00:23:12.905 },{ 00:23:12.905 "params": { 00:23:12.905 "name": "Nvme10", 00:23:12.905 "trtype": "tcp", 00:23:12.905 "traddr": "10.0.0.2", 00:23:12.905 "adrfam": "ipv4", 00:23:12.905 "trsvcid": "4420", 00:23:12.905 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:12.905 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:12.905 "hdgst": false, 00:23:12.905 "ddgst": false 00:23:12.905 }, 00:23:12.905 "method": "bdev_nvme_attach_controller" 00:23:12.905 }' 00:23:12.905 [2024-10-01 16:48:04.482314] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.905 [2024-10-01 16:48:04.544070] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.286 Running I/O for 1 seconds... 
00:23:15.487 2050.00 IOPS, 128.12 MiB/s
00:23:15.487 Latency(us)
00:23:15.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:15.487 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:15.487 Verification LBA range: start 0x0 length 0x400
00:23:15.487 Nvme1n1 : 1.09 233.95 14.62 0.00 0.00 265891.25 20870.70 225847.14
00:23:15.487 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:15.487 Verification LBA range: start 0x0 length 0x400
00:23:15.487 Nvme2n1 : 1.11 230.41 14.40 0.00 0.00 270770.61 18450.90 225847.14
00:23:15.487 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:15.487 Verification LBA range: start 0x0 length 0x400
00:23:15.487 Nvme3n1 : 1.17 274.10 17.13 0.00 0.00 224286.09 16232.76 245205.46
00:23:15.487 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:15.487 Verification LBA range: start 0x0 length 0x400
00:23:15.487 Nvme4n1 : 1.14 281.71 17.61 0.00 0.00 214565.02 10586.58 230686.72
00:23:15.487 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:15.487 Verification LBA range: start 0x0 length 0x400
00:23:15.487 Nvme5n1 : 1.17 272.72 17.04 0.00 0.00 218413.21 13208.02 246818.66
00:23:15.487 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:15.487 Verification LBA range: start 0x0 length 0x400
00:23:15.487 Nvme6n1 : 1.11 231.43 14.46 0.00 0.00 251619.64 16736.89 229073.53
00:23:15.487 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:15.487 Verification LBA range: start 0x0 length 0x400
00:23:15.487 Nvme7n1 : 1.17 272.51 17.03 0.00 0.00 211484.20 15123.69 227460.33
00:23:15.487 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:15.487 Verification LBA range: start 0x0 length 0x400
00:23:15.487 Nvme8n1 : 1.16 276.13 17.26 0.00 0.00 204203.09 16736.89 222620.75
00:23:15.487 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:15.487 Verification LBA range: start 0x0 length 0x400
00:23:15.487 Nvme9n1 : 1.18 274.40 17.15 0.00 0.00 203383.94 6805.66 235526.30
00:23:15.487 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:15.487 Verification LBA range: start 0x0 length 0x400
00:23:15.487 Nvme10n1 : 1.19 269.71 16.86 0.00 0.00 203665.33 11292.36 254884.63
00:23:15.487 ===================================================================================================================
00:23:15.487 Total : 2617.06 163.57 0.00 0.00 224506.60 6805.66 254884.63
00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup
00:23:15.748 16:48:07
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:15.748 rmmod nvme_tcp 00:23:15.748 rmmod nvme_fabrics 00:23:15.748 rmmod nvme_keyring 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 2754052 ']' 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 2754052 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2754052 ']' 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2754052 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2754052 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2754052' 00:23:15.748 killing process with pid 2754052 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2754052 00:23:15.748 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2754052 00:23:16.008 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:16.008 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:16.008 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:16.008 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:16.008 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:23:16.008 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:16.008 16:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:23:16.008 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:16.008 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:16.008 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.008 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.008 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:18.550 00:23:18.550 real 0m15.675s 00:23:18.550 user 0m31.035s 00:23:18.550 sys 0m6.409s 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:18.550 ************************************ 00:23:18.550 END TEST nvmf_shutdown_tc1 00:23:18.550 ************************************ 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:18.550 ************************************ 00:23:18.550 START TEST nvmf_shutdown_tc2 00:23:18.550 ************************************ 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
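Editor's note: the tc1 teardown traced just before the END banner follows a fixed order: unload the host-side NVMe/TCP kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kill the nvmf target pid and wait on it, strip only the SPDK-tagged iptables rules (iptables-save | grep -v SPDK_NVMF | iptables-restore), then remove the target network namespace and flush the test interface. A hedged sketch of that order; an illustrative helper, not the verbatim common.sh implementation:

# Sketch: teardown order mirrored from the trace above; values from this run.
nvmf_teardown() {
    local pid=$1 netns=$2 iface=$3
    modprobe -v -r nvme-tcp nvme-fabrics           # drops nvme_tcp, nvme_fabrics, nvme_keyring
    kill "$pid" 2>/dev/null && wait "$pid"         # stop nvmf_tgt and reap it (must be our child)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep every non-SPDK rule
    ip netns delete "$netns" 2>/dev/null           # remove_spdk_ns equivalent
    ip -4 addr flush "$iface"
}
nvmf_teardown 2754052 cvl_0_0_ns_spdk cvl_0_1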
00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:18.550 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:18.551 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:18.551 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:18.551 16:48:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:18.551 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:18.551 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:18.551 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.551 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.551 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.551 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:18.551 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:18.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:18.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:23:18.551 00:23:18.551 --- 10.0.0.2 ping statistics --- 00:23:18.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.551 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:23:18.551 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:18.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:23:18.551 00:23:18.551 --- 10.0.0.1 ping statistics --- 00:23:18.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.551 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:23:18.551 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.551 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:23:18.551 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:18.551 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.552 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:18.552 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:18.552 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.552 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:18.552 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:18.552 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:18.552 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:18.552 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:18.552 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.552 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2756106 00:23:18.552 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2756106 00:23:18.552 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:18.552 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2756106 ']' 00:23:18.552 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.552 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.552 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.552 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.552 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.552 [2024-10-01 16:48:10.173583] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:23:18.552 [2024-10-01 16:48:10.173653] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.811 [2024-10-01 16:48:10.238997] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:18.811 [2024-10-01 16:48:10.305366] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.811 [2024-10-01 16:48:10.305402] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.811 [2024-10-01 16:48:10.305409] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.811 [2024-10-01 16:48:10.305414] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.811 [2024-10-01 16:48:10.305419] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:18.811 [2024-10-01 16:48:10.305577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.811 [2024-10-01 16:48:10.305702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:18.811 [2024-10-01 16:48:10.305808] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:23:18.811 [2024-10-01 16:48:10.305810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.811 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:18.811 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:18.811 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:18.811 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:18.811 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.811 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.811 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:18.811 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.812 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.812 [2024-10-01 16:48:10.454704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.812 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.812 16:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:18.812 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:18.812 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:18.812 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.812 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:18.812 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:18.812 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:18.812 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:18.812 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:18.812 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:18.812 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:18.812 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:18.812 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:18.812 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:18.812 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:19.071 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.071 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:19.071 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.071 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:19.071 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.071 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:19.071 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.071 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:19.071 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.071 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:19.071 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:19.071 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.071 
16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.071 Malloc1 00:23:19.071 [2024-10-01 16:48:10.557695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.071 Malloc2 00:23:19.071 Malloc3 00:23:19.071 Malloc4 00:23:19.071 Malloc5 00:23:19.071 Malloc6 00:23:19.331 Malloc7 00:23:19.331 Malloc8 00:23:19.331 Malloc9 00:23:19.331 Malloc10 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2756553 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2756553 /var/tmp/bdevperf.sock 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2756553 ']' 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
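Editor's note: both the target (pid 2756106, /var/tmp/spdk.sock) and bdevperf (pid 2756553, /var/tmp/bdevperf.sock) are gated on the same waitforlisten idiom visible above: record the backgrounded pid, then poll until the process has bound its RPC socket, failing fast if the pid dies first (the kill -0 liveness probe traced earlier). A minimal sketch of the idea, not autotest_common.sh's actual implementation:

# Sketch: poll for an RPC unix socket, bail out if the process exits first.
waitforsock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process gone: give up
        [[ -S $sock ]] && return 0               # socket bound: ready
        sleep 0.1
    done
    return 1                                     # timed out
}
waitforsock "$perfpid" /var/tmp/bdevperf.sock    # then e.g. rpc_cmd framework_wait_init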
00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:19.331 { 00:23:19.331 "params": { 00:23:19.331 "name": "Nvme$subsystem", 00:23:19.331 "trtype": "$TEST_TRANSPORT", 00:23:19.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.331 "adrfam": "ipv4", 00:23:19.331 "trsvcid": "$NVMF_PORT", 00:23:19.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.331 "hdgst": ${hdgst:-false}, 00:23:19.331 "ddgst": ${ddgst:-false} 00:23:19.331 }, 00:23:19.331 "method": "bdev_nvme_attach_controller" 00:23:19.331 } 00:23:19.331 EOF 00:23:19.331 )") 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:19.331 { 00:23:19.331 "params": { 00:23:19.331 "name": "Nvme$subsystem", 00:23:19.331 "trtype": "$TEST_TRANSPORT", 00:23:19.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.331 "adrfam": "ipv4", 00:23:19.331 "trsvcid": "$NVMF_PORT", 00:23:19.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.331 "hdgst": ${hdgst:-false}, 00:23:19.331 "ddgst": ${ddgst:-false} 00:23:19.331 }, 00:23:19.331 "method": "bdev_nvme_attach_controller" 00:23:19.331 } 00:23:19.331 EOF 00:23:19.331 )") 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:19.331 { 00:23:19.331 "params": { 00:23:19.331 "name": "Nvme$subsystem", 00:23:19.331 "trtype": "$TEST_TRANSPORT", 00:23:19.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.331 "adrfam": "ipv4", 00:23:19.331 "trsvcid": "$NVMF_PORT", 00:23:19.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.331 "hdgst": ${hdgst:-false}, 00:23:19.331 "ddgst": ${ddgst:-false} 00:23:19.331 }, 00:23:19.331 "method": 
"bdev_nvme_attach_controller" 00:23:19.331 } 00:23:19.331 EOF 00:23:19.331 )") 00:23:19.331 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:19.332 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:19.332 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:19.332 { 00:23:19.332 "params": { 00:23:19.332 "name": "Nvme$subsystem", 00:23:19.332 "trtype": "$TEST_TRANSPORT", 00:23:19.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.332 "adrfam": "ipv4", 00:23:19.332 "trsvcid": "$NVMF_PORT", 00:23:19.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.332 "hdgst": ${hdgst:-false}, 00:23:19.332 "ddgst": ${ddgst:-false} 00:23:19.332 }, 00:23:19.332 "method": "bdev_nvme_attach_controller" 00:23:19.332 } 00:23:19.332 EOF 00:23:19.332 )") 00:23:19.332 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:19.332 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:19.332 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:19.332 { 00:23:19.332 "params": { 00:23:19.332 "name": "Nvme$subsystem", 00:23:19.332 "trtype": "$TEST_TRANSPORT", 00:23:19.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.332 "adrfam": "ipv4", 00:23:19.332 "trsvcid": "$NVMF_PORT", 00:23:19.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.332 "hdgst": ${hdgst:-false}, 00:23:19.332 "ddgst": ${ddgst:-false} 00:23:19.332 }, 00:23:19.332 "method": "bdev_nvme_attach_controller" 00:23:19.332 } 00:23:19.332 EOF 00:23:19.332 )") 00:23:19.332 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:19.332 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:19.332 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:19.332 { 00:23:19.332 "params": { 00:23:19.332 "name": "Nvme$subsystem", 00:23:19.332 "trtype": "$TEST_TRANSPORT", 00:23:19.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.332 "adrfam": "ipv4", 00:23:19.332 "trsvcid": "$NVMF_PORT", 00:23:19.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.332 "hdgst": ${hdgst:-false}, 00:23:19.332 "ddgst": ${ddgst:-false} 00:23:19.332 }, 00:23:19.332 "method": "bdev_nvme_attach_controller" 00:23:19.332 } 00:23:19.332 EOF 00:23:19.332 )") 00:23:19.332 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:19.332 [2024-10-01 16:48:11.006102] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:23:19.332 [2024-10-01 16:48:11.006155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2756553 ] 00:23:19.332 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:19.332 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:19.332 { 00:23:19.332 "params": { 00:23:19.332 "name": "Nvme$subsystem", 00:23:19.332 "trtype": "$TEST_TRANSPORT", 00:23:19.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.332 "adrfam": "ipv4", 00:23:19.332 "trsvcid": "$NVMF_PORT", 00:23:19.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.332 "hdgst": ${hdgst:-false}, 00:23:19.332 "ddgst": ${ddgst:-false} 00:23:19.332 }, 00:23:19.332 "method": "bdev_nvme_attach_controller" 00:23:19.332 } 00:23:19.332 EOF 00:23:19.332 )") 00:23:19.332 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:19.592 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:19.592 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:19.592 { 00:23:19.592 "params": { 00:23:19.592 "name": "Nvme$subsystem", 00:23:19.592 "trtype": "$TEST_TRANSPORT", 00:23:19.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.592 "adrfam": "ipv4", 00:23:19.592 "trsvcid": "$NVMF_PORT", 00:23:19.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.592 "hdgst": ${hdgst:-false}, 00:23:19.592 "ddgst": ${ddgst:-false} 00:23:19.592 }, 00:23:19.592 "method": "bdev_nvme_attach_controller" 00:23:19.592 } 00:23:19.592 EOF 00:23:19.592 )") 00:23:19.592 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:19.592 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:19.592 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:19.592 { 00:23:19.592 "params": { 00:23:19.592 "name": "Nvme$subsystem", 00:23:19.592 "trtype": "$TEST_TRANSPORT", 00:23:19.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.592 "adrfam": "ipv4", 00:23:19.592 "trsvcid": "$NVMF_PORT", 00:23:19.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.592 "hdgst": ${hdgst:-false}, 00:23:19.592 "ddgst": ${ddgst:-false} 00:23:19.592 }, 00:23:19.592 "method": "bdev_nvme_attach_controller" 00:23:19.592 } 00:23:19.592 EOF 00:23:19.592 )") 00:23:19.592 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:19.592 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:19.592 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:19.592 { 00:23:19.592 "params": { 00:23:19.592 "name": "Nvme$subsystem", 00:23:19.592 "trtype": "$TEST_TRANSPORT", 00:23:19.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.592 
"adrfam": "ipv4", 00:23:19.592 "trsvcid": "$NVMF_PORT", 00:23:19.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.593 "hdgst": ${hdgst:-false}, 00:23:19.593 "ddgst": ${ddgst:-false} 00:23:19.593 }, 00:23:19.593 "method": "bdev_nvme_attach_controller" 00:23:19.593 } 00:23:19.593 EOF 00:23:19.593 )") 00:23:19.593 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:19.593 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:23:19.593 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:23:19.593 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:23:19.593 "params": { 00:23:19.593 "name": "Nvme1", 00:23:19.593 "trtype": "tcp", 00:23:19.593 "traddr": "10.0.0.2", 00:23:19.593 "adrfam": "ipv4", 00:23:19.593 "trsvcid": "4420", 00:23:19.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:19.593 "hdgst": false, 00:23:19.593 "ddgst": false 00:23:19.593 }, 00:23:19.593 "method": "bdev_nvme_attach_controller" 00:23:19.593 },{ 00:23:19.593 "params": { 00:23:19.593 "name": "Nvme2", 00:23:19.593 "trtype": "tcp", 00:23:19.593 "traddr": "10.0.0.2", 00:23:19.593 "adrfam": "ipv4", 00:23:19.593 "trsvcid": "4420", 00:23:19.593 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:19.593 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:19.593 "hdgst": false, 00:23:19.593 "ddgst": false 00:23:19.593 }, 00:23:19.593 "method": "bdev_nvme_attach_controller" 00:23:19.593 },{ 00:23:19.593 "params": { 00:23:19.593 "name": "Nvme3", 00:23:19.593 "trtype": "tcp", 00:23:19.593 "traddr": "10.0.0.2", 00:23:19.593 "adrfam": "ipv4", 00:23:19.593 "trsvcid": "4420", 00:23:19.593 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:19.593 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:19.593 "hdgst": false, 00:23:19.593 "ddgst": false 00:23:19.593 }, 00:23:19.593 "method": "bdev_nvme_attach_controller" 00:23:19.593 },{ 00:23:19.593 "params": { 00:23:19.593 "name": "Nvme4", 00:23:19.593 "trtype": "tcp", 00:23:19.593 "traddr": "10.0.0.2", 00:23:19.593 "adrfam": "ipv4", 00:23:19.593 "trsvcid": "4420", 00:23:19.593 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:19.593 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:19.593 "hdgst": false, 00:23:19.593 "ddgst": false 00:23:19.593 }, 00:23:19.593 "method": "bdev_nvme_attach_controller" 00:23:19.593 },{ 00:23:19.593 "params": { 00:23:19.593 "name": "Nvme5", 00:23:19.593 "trtype": "tcp", 00:23:19.593 "traddr": "10.0.0.2", 00:23:19.593 "adrfam": "ipv4", 00:23:19.593 "trsvcid": "4420", 00:23:19.593 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:19.593 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:19.593 "hdgst": false, 00:23:19.593 "ddgst": false 00:23:19.593 }, 00:23:19.593 "method": "bdev_nvme_attach_controller" 00:23:19.593 },{ 00:23:19.593 "params": { 00:23:19.593 "name": "Nvme6", 00:23:19.593 "trtype": "tcp", 00:23:19.593 "traddr": "10.0.0.2", 00:23:19.593 "adrfam": "ipv4", 00:23:19.593 "trsvcid": "4420", 00:23:19.593 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:19.593 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:19.593 "hdgst": false, 00:23:19.593 "ddgst": false 00:23:19.593 }, 00:23:19.593 "method": "bdev_nvme_attach_controller" 00:23:19.593 },{ 00:23:19.593 "params": { 00:23:19.593 "name": "Nvme7", 00:23:19.593 "trtype": "tcp", 00:23:19.593 "traddr": "10.0.0.2", 
00:23:19.593 "adrfam": "ipv4", 00:23:19.593 "trsvcid": "4420", 00:23:19.593 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:19.593 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:19.593 "hdgst": false, 00:23:19.593 "ddgst": false 00:23:19.593 }, 00:23:19.593 "method": "bdev_nvme_attach_controller" 00:23:19.593 },{ 00:23:19.593 "params": { 00:23:19.593 "name": "Nvme8", 00:23:19.593 "trtype": "tcp", 00:23:19.593 "traddr": "10.0.0.2", 00:23:19.593 "adrfam": "ipv4", 00:23:19.593 "trsvcid": "4420", 00:23:19.593 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:19.593 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:19.593 "hdgst": false, 00:23:19.593 "ddgst": false 00:23:19.593 }, 00:23:19.593 "method": "bdev_nvme_attach_controller" 00:23:19.593 },{ 00:23:19.593 "params": { 00:23:19.593 "name": "Nvme9", 00:23:19.593 "trtype": "tcp", 00:23:19.593 "traddr": "10.0.0.2", 00:23:19.593 "adrfam": "ipv4", 00:23:19.593 "trsvcid": "4420", 00:23:19.593 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:19.593 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:19.593 "hdgst": false, 00:23:19.593 "ddgst": false 00:23:19.593 }, 00:23:19.593 "method": "bdev_nvme_attach_controller" 00:23:19.593 },{ 00:23:19.593 "params": { 00:23:19.593 "name": "Nvme10", 00:23:19.593 "trtype": "tcp", 00:23:19.593 "traddr": "10.0.0.2", 00:23:19.593 "adrfam": "ipv4", 00:23:19.593 "trsvcid": "4420", 00:23:19.593 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:19.593 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:19.593 "hdgst": false, 00:23:19.593 "ddgst": false 00:23:19.593 }, 00:23:19.593 "method": "bdev_nvme_attach_controller" 00:23:19.593 }' 00:23:19.593 [2024-10-01 16:48:11.082137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.593 [2024-10-01 16:48:11.144849] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.977 Running I/O for 10 seconds... 
00:23:21.547 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:21.547 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:21.547 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:21.547 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.547 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:21.547 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.547 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:21.547 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:21.547 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:21.547 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:21.547 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:21.547 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:21.547 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:21.547 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:21.547 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:21.547 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.547 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:21.547 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.547 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:21.547 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:21.547 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:21.806 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:21.806 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:21.806 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:21.806 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:21.806 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.806 16:48:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:21.806 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.806 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:21.806 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:21.806 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:21.806 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:21.807 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:21.807 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2756553 00:23:21.807 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2756553 ']' 00:23:21.807 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2756553 00:23:21.807 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:23:21.807 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:21.807 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2756553 00:23:21.807 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:21.807 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:21.807 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2756553' 00:23:21.807 killing process with pid 2756553 00:23:21.807 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2756553 00:23:21.807 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2756553 00:23:21.807 Received shutdown signal, test time was about 0.786396 seconds
00:23:21.807
00:23:21.807 Latency(us)
00:23:21.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:21.807 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.807 Verification LBA range: start 0x0 length 0x400
00:23:21.807 Nvme1n1 : 0.75 254.36 15.90 0.00 0.00 247917.75 34078.72 216167.98
00:23:21.807 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.807 Verification LBA range: start 0x0 length 0x400
00:23:21.807 Nvme2n1 : 0.76 252.79 15.80 0.00 0.00 243459.41 32868.82 208102.01
00:23:21.807 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.807 Verification LBA range: start 0x0 length 0x400
00:23:21.807 Nvme3n1 : 0.78 327.80 20.49 0.00 0.00 183564.11 11645.24 235526.30
00:23:21.807 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.807 Verification LBA range: start 0x0 length 0x400
00:23:21.807 Nvme4n1 : 0.76 251.58 15.72 0.00 0.00 233062.93 13611.32 230686.72
00:23:21.807 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.807 Verification LBA range: start 0x0 length 0x400
00:23:21.807 Nvme5n1 : 0.77 248.78 15.55 0.00 0.00 229242.62 42951.29 206488.81
00:23:21.807 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.807 Verification LBA range: start 0x0 length 0x400
00:23:21.807 Nvme6n1 : 0.77 248.49 15.53 0.00 0.00 223577.53 28835.84 224233.94
00:23:21.807 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.807 Verification LBA range: start 0x0 length 0x400
00:23:21.807 Nvme7n1 : 0.76 254.01 15.88 0.00 0.00 210690.89 22080.59 233913.11
00:23:21.807 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.807 Verification LBA range: start 0x0 length 0x400
00:23:21.807 Nvme8n1 : 0.77 248.21 15.51 0.00 0.00 212190.65 11846.89 233913.11
00:23:21.807 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.807 Verification LBA range: start 0x0 length 0x400
00:23:21.807 Nvme9n1 : 0.79 244.41 15.28 0.00 0.00 210856.57 16434.41 254884.63
00:23:21.807 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.807 Verification LBA range: start 0x0 length 0x400
00:23:21.807 Nvme10n1 : 0.78 246.68 15.42 0.00 0.00 202604.70 20568.22 232299.91
00:23:21.807 ===================================================================================================================
00:23:21.807 Total : 2577.11 161.07 0.00 0.00 218550.50 11645.24 254884.63
00:23:22.066 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2756106 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:23.127 rmmod nvme_tcp 00:23:23.127 rmmod nvme_fabrics 00:23:23.127 rmmod nvme_keyring 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
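The loop that gated this shutdown is the waitforio helper traced just above: it polls bdevperf's accumulated read count over the RPC socket (67 ops on the first pass, 131 on the second) until it crosses the 100-op threshold, at which point the test kills the perf process and lets it dump the latency table before the target itself is torn down. A condensed sketch of that loop follows; waitforio_sketch is a hypothetical name, and rpc_cmd stands in for the harness helper of the same name seen in the trace.

# Sketch of the polling loop from target/shutdown.sh: up to 10 attempts,
# 0.25 s apart, reading num_read_ops for one bdev via bdev_get_iostat.
waitforio_sketch() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i=10 count
    while ((i != 0)); do
        count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
        ((i--))
    done
    return $ret
}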
00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 2756106 ']' 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 2756106 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2756106 ']' 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2756106 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2756106 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2756106' 00:23:23.127 killing process with pid 2756106 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2756106 00:23:23.127 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2756106 00:23:23.429 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:23.429 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:23.429 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:23.429 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:23.429 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:23:23.429 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:23.429 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:23:23.429 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:23.429 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:23.429 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.429 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.429 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.970 16:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:25.970 00:23:25.970 real 0m7.315s 00:23:25.970 user 0m21.419s 00:23:25.970 sys 0m1.222s 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:25.970 ************************************ 00:23:25.970 END TEST nvmf_shutdown_tc2 00:23:25.970 ************************************ 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:25.970 ************************************ 00:23:25.970 START TEST nvmf_shutdown_tc3 00:23:25.970 ************************************ 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:25.970 16:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:25.970 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:25.971 16:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:25.971 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:25.971 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.971 
16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:25.971 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:25.971 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.971 16:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:25.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:23:25.971 00:23:25.971 --- 10.0.0.2 ping statistics --- 00:23:25.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.971 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:23:25.971 00:23:25.971 --- 10.0.0.1 ping statistics --- 00:23:25.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.971 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:23:25.971 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=2757660 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 2757660 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2757660 ']' 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
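Before starting nvmf_tgt, nvmf_shutdown_tc3 rebuilds the split test bed seen in the trace: the target-side e810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1, port 4420 is opened in iptables, and both directions are verified with a single ping each way. The commands below are condensed from the trace itself (device and namespace names as shown there; error handling and the preceding addr flushes omitted):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator

With the target namespaced, every target-side command (including the nvmf_tgt launch that follows) is wrapped in ip netns exec cvl_0_0_ns_spdk, which is exactly what the NVMF_TARGET_NS_CMD assignment in the trace sets up.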
00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:25.972 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:25.972 [2024-10-01 16:48:17.458531] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:23:25.972 [2024-10-01 16:48:17.458586] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.972 [2024-10-01 16:48:17.519211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.972 [2024-10-01 16:48:17.581833] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.972 [2024-10-01 16:48:17.581867] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.972 [2024-10-01 16:48:17.581873] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.972 [2024-10-01 16:48:17.581878] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.972 [2024-10-01 16:48:17.581882] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.972 [2024-10-01 16:48:17.581992] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.972 [2024-10-01 16:48:17.582111] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.972 [2024-10-01 16:48:17.582265] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:23:25.972 [2024-10-01 16:48:17.582267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.232 [2024-10-01 16:48:17.724596] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.232 16:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:26.232 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.233 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:26.233 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.233 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:26.233 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.233 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:26.233 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.233 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:26.233 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.233 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:26.233 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.233 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:26.233 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:26.233 
16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.233 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.233 Malloc1 00:23:26.233 [2024-10-01 16:48:17.823283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.233 Malloc2 00:23:26.233 Malloc3 00:23:26.492 Malloc4 00:23:26.492 Malloc5 00:23:26.492 Malloc6 00:23:26.492 Malloc7 00:23:26.492 Malloc8 00:23:26.492 Malloc9 00:23:26.492 Malloc10 00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2757985 00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2757985 /var/tmp/bdevperf.sock 00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2757985 ']' 00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
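The Malloc1 through Malloc10 bdevs and the listener notice above come from the create_subsystems step: target/shutdown.sh appends one block of RPC commands per subsystem to rpcs.txt (the repeated "# cat" lines in the trace) and then replays the whole file through a single rpc_cmd invocation. The fragment bodies are not echoed in the trace, so the block below is a plausible reconstruction under stated assumptions: the bdev size and block size (MALLOC_BDEV_SIZE, MALLOC_BLOCK_SIZE, with illustrative defaults) and the "-s SPDK$i" serial-number argument are assumptions, while the NQN scheme, address, and port match the attach config printed earlier.

# Plausible shape of the create_subsystems batch; values marked above as
# assumptions. One batch file, replayed through a single RPC session.
for i in {1..10}; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create ${MALLOC_BDEV_SIZE:-64} ${MALLOC_BLOCK_SIZE:-512} -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < rpcs.txt   # one batch; the Malloc1..Malloc10 lines are its output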
00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=()
00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config
00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:23:26.752 {
00:23:26.752 "params": {
00:23:26.752 "name": "Nvme$subsystem",
00:23:26.752 "trtype": "$TEST_TRANSPORT",
00:23:26.752 "traddr": "$NVMF_FIRST_TARGET_IP",
00:23:26.752 "adrfam": "ipv4",
00:23:26.752 "trsvcid": "$NVMF_PORT",
00:23:26.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:23:26.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:23:26.752 "hdgst": ${hdgst:-false},
00:23:26.752 "ddgst": ${ddgst:-false}
00:23:26.752 },
00:23:26.752 "method": "bdev_nvme_attach_controller"
00:23:26.752 }
00:23:26.752 EOF
00:23:26.752 )")
00:23:26.752 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat
00:23:26.752 [2024-10-01 16:48:18.265955] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:23:26.753 [2024-10-01 16:48:18.266013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2757985 ]
00:23:26.753 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq .
00:23:26.753 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=,
00:23:26.753 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:23:26.753 "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },{
00:23:26.753 "params": { "name": "Nvme2", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode2", "hostnqn": "nqn.2016-06.io.spdk:host2", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },{
00:23:26.753 "params": { "name": "Nvme3", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode3", "hostnqn": "nqn.2016-06.io.spdk:host3", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },{
00:23:26.753 "params": { "name": "Nvme4", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode4", "hostnqn": "nqn.2016-06.io.spdk:host4", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },{
00:23:26.753 "params": { "name": "Nvme5", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode5", "hostnqn": "nqn.2016-06.io.spdk:host5", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },{
00:23:26.753 "params": { "name": "Nvme6", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode6", "hostnqn": "nqn.2016-06.io.spdk:host6", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },{
00:23:26.753 "params": { "name": "Nvme7", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode7", "hostnqn": "nqn.2016-06.io.spdk:host7", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },{
00:23:26.753 "params": { "name": "Nvme8", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode8", "hostnqn": "nqn.2016-06.io.spdk:host8", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },{
00:23:26.753 "params": { "name": "Nvme9", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode9", "hostnqn": "nqn.2016-06.io.spdk:host9", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },{
00:23:26.753 "params": { "name": "Nvme10", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode10", "hostnqn": "nqn.2016-06.io.spdk:host10", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }'
00:23:26.753 [2024-10-01 16:48:18.343012] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:26.753 [2024-10-01 16:48:18.405145] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:23:28.132 Running I/O for 10 seconds...
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']'
00:23:28.133 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:23:28.392 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:23:28.392 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:23:28.392 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:23:28.392 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:28.392 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:23:28.392 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:28.651 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:28.651 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67
00:23:28.651 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']'
00:23:28.651 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:23:28.929 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:23:28.929 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:23:28.929 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195
00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']'
00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2757660
00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2757660 ']'
00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2757660
00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname
00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2757660
00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
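
The trace above is shutdown.sh's waitforio helper (lines 51-70): it polls bdevperf over the RPC socket until Nvme1n1 has completed at least 100 reads (num_read_ops went 3, then 67, then 195 across three 0.25 s polls), so the target is only killed once real I/O is in flight. A sketch of that loop, reconstructed from the traced commands; rpc_cmd here stands in for the harness's wrapper around scripts/rpc.py:

waitforio() {
    local rpc_addr=$1 bdev=$2
    [ -z "$rpc_addr" ] && return 1    # shutdown.sh@51: need an RPC socket
    [ -z "$bdev" ] && return 1        # shutdown.sh@55: need a bdev name
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do   # xtraced as (( i = 10 )) / (( i != 0 )) / (( i-- ))
        read_io_count=$(rpc_cmd -s "$rpc_addr" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0    # enough reads observed; safe to proceed with the shutdown
            break
        fi
        sleep 0.25
    done
    return $ret
}
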
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2757660' 00:23:28.930 killing process with pid 2757660 00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2757660 00:23:28.930 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2757660 00:23:28.930 [2024-10-01 16:48:20.483584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 
00:23:28.930 [2024-10-01 16:48:20.483738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is 
same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.483962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xcec9d0 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485262] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.930 [2024-10-01 16:48:20.485346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 
00:23:28.931 [2024-10-01 16:48:20.485370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.485465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef410 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.487542] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:28.931 [2024-10-01 16:48:20.488025] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:28.931 [2024-10-01 16:48:20.491096] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 
00:23:28.931 [2024-10-01 16:48:20.491223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is 
same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.931 [2024-10-01 16:48:20.491409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.491413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.491418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.491423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecea0 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492954] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.492995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.493000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.493005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.493009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.493015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.493021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.493026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.493031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.493038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.493043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.493047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.493052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.493057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.493062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 00:23:28.932 [2024-10-01 16:48:20.493067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced370 is same with the state(6) to be set 
00:23:28.932 [2024-10-01 16:48:20.493321] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:28.932 [2024-10-01 16:48:20.493964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:28.932 [2024-10-01 16:48:20.493990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.932 [2024-10-01 16:48:20.494005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:28.932 [2024-10-01 16:48:20.494012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.932 [2024-10-01 16:48:20.494019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:28.932 [2024-10-01 16:48:20.494026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.932 [2024-10-01 16:48:20.494034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:28.932 [2024-10-01 16:48:20.494041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.932 [2024-10-01 16:48:20.494048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x298c060 is same with the state(6) to be set
00:23:28.932 [2024-10-01 16:48:20.494077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:28.932 [2024-10-01 16:48:20.494085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.932 [2024-10-01 16:48:20.494093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:28.932 [2024-10-01 16:48:20.494100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.932 [2024-10-01 16:48:20.494112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:28.932 [2024-10-01 16:48:20.494119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.932 [2024-10-01 16:48:20.494127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:28.932 [2024-10-01 16:48:20.494134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.933 [2024-10-01 16:48:20.494140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250e3b0 is same with the state(6) to be set
00:23:28.933 [2024-10-01 16:48:20.494163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:28.933 [2024-10-01 16:48:20.494171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.933 [2024-10-01 16:48:20.494179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:28.933 [2024-10-01 16:48:20.494186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.933 [2024-10-01 16:48:20.494193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:28.933 [2024-10-01 16:48:20.494200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.933 [2024-10-01 16:48:20.494208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:28.933 [2024-10-01 16:48:20.494214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.933 [2024-10-01 16:48:20.494221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fc100 is same with the state(6) to be set
00:23:28.933 [2024-10-01 16:48:20.494265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:28.933 [2024-10-01 16:48:20.494277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.933 [2024-10-01 16:48:20.494285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:28.933 [2024-10-01 16:48:20.494292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.933 [2024-10-01 16:48:20.494300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:28.933 [2024-10-01 16:48:20.494306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.933 [2024-10-01 16:48:20.494314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:28.933 [2024-10-01 16:48:20.494321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.933 [2024-10-01 16:48:20.494327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fed60 is same with the state(6) to be set
00:23:28.933 [2024-10-01 16:48:20.494942] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:28.933 [2024-10-01 16:48:20.500569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced860 is same with the state(6) to be set
00:23:28.934 [2024-10-01 16:48:20.501569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcedd30 is same with the state(6) to be set
00:23:28.934 [2024-10-01 16:48:20.502717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee0b0 is same with the state(6) to be set
00:23:28.935 [2024-10-01 16:48:20.503766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee580 is same with the state(6) to be set
00:23:28.935 [2024-10-01 16:48:20.504141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceea70 is same with the state(6) to be set
00:23:28.936 [2024-10-01 16:48:20.504910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceef40 is same with the state(6) to be set
00:23:28.937 [2024-10-01 16:48:20.515195 through 16:48:20.515372] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2935f60 / tqpair=0x242b610 is same with the state(6) to be set
00:23:28.937 [2024-10-01 16:48:20.515395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x298c060 (9): Bad file descriptor
00:23:28.937 [2024-10-01 16:48:20.515414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250e3b0 (9): Bad file descriptor
00:23:28.937 [2024-10-01 16:48:20.515430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fc100 (9): Bad file descriptor
00:23:28.937 [2024-10-01 16:48:20.515454 through 16:48:20.515674] nvme_qpair.c / nvme_tcp.c: same ASYNC EVENT REQUEST abort pattern and recv-state error for tqpair=0x2938700, tqpair=0x296f5c0 and tqpair=0x250d370
00:23:28.937 [2024-10-01 16:48:20.515689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fed60 (9): Bad file descriptor
00:23:28.937 [2024-10-01 16:48:20.515711 through 16:48:20.515771] nvme_qpair.c / nvme_tcp.c: same ASYNC EVENT REQUEST abort pattern and recv-state error for tqpair=0x296e1d0
00:23:28.937-00:23:28.939 [2024-10-01 16:48:20.516869 through 16:48:20.517916] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 64 WRITE commands (sqid:1 cid:0-63 nsid:1 lba:32768-40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.939 [2024-10-01 16:48:20.517978] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2916840 was disconnected and freed. reset controller.
00:23:28.939 [2024-10-01 16:48:20.519328] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:28.939 [2024-10-01 16:48:20.519355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:23:28.939 [2024-10-01 16:48:20.519372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x296f5c0 (9): Bad file descriptor
00:23:28.939 [2024-10-01 16:48:20.519556 through 16:48:20.519704] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (repeated 4 times)
00:23:28.939 [2024-10-01 16:48:20.520336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.939 [2024-10-01 16:48:20.520353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x296f5c0 with addr=10.0.0.2, port=4420
00:23:28.939 [2024-10-01 16:48:20.520361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x296f5c0 is same with the state(6) to be set
00:23:28.939 [2024-10-01 16:48:20.520427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x296f5c0 (9): Bad file descriptor
00:23:28.939 [2024-10-01 16:48:20.520476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:23:28.939 [2024-10-01 16:48:20.520484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:23:28.939 [2024-10-01 16:48:20.520493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:23:28.939 [2024-10-01 16:48:20.520539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:28.939 [2024-10-01 16:48:20.525190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2935f60 (9): Bad file descriptor
00:23:28.939 [2024-10-01 16:48:20.525211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242b610 (9): Bad file descriptor
00:23:28.939 [2024-10-01 16:48:20.525247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2938700 (9): Bad file descriptor
00:23:28.939 [2024-10-01 16:48:20.525266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250d370 (9): Bad file descriptor
00:23:28.939 [2024-10-01 16:48:20.525287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x296e1d0 (9): Bad file descriptor
00:23:28.939-00:23:28.941 [2024-10-01 16:48:20.525396 through 16:48:20.526423] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 64 READ commands (sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.941 [2024-10-01 16:48:20.526431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2718fa0 is same with the state(6) to be set
00:23:28.941 [2024-10-01 16:48:20.527611 through 16:48:20.528018] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ commands sqid:1 cid:0-23 nsid:1 lba:16384-19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.941 [2024-10-01 16:48:20.528027] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.941 [2024-10-01 16:48:20.528036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.941 [2024-10-01 16:48:20.528043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.941 [2024-10-01 16:48:20.528052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.941 [2024-10-01 16:48:20.528059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.941 [2024-10-01 16:48:20.528068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.941 [2024-10-01 16:48:20.528075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.941 [2024-10-01 16:48:20.528084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.942 [2024-10-01 16:48:20.528663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.942 [2024-10-01 16:48:20.528671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.943 [2024-10-01 16:48:20.528678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271a2a0 is same with the state(6) to be set 00:23:28.943 [2024-10-01 16:48:20.529851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.943 [2024-10-01 16:48:20.529866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.943 [2024-10-01 16:48:20.529878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.943 [2024-10-01 16:48:20.529886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.943 [2024-10-01 16:48:20.529897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.943 [2024-10-01 16:48:20.529906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.943 [2024-10-01 16:48:20.529916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.943 [2024-10-01 16:48:20.529925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.943 [2024-10-01 16:48:20.529935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.943 [2024-10-01 16:48:20.529944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.943 [2024-10-01 16:48:20.529954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.943 [2024-10-01 16:48:20.529962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.943 [2024-10-01 16:48:20.529975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.943 [2024-10-01 16:48:20.529984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.943 [2024-10-01 16:48:20.529995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.943 [2024-10-01 16:48:20.530003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.943 [2024-10-01 16:48:20.530013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.943 [2024-10-01 16:48:20.530022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.943 [2024-10-01 16:48:20.530034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
[... 62 identical READ/ABORTED - SQ DELETION pairs elided (cid:1-62, lba:24704-32512) ...]
00:23:28.944 [2024-10-01 16:48:20.530897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:28.944 [2024-10-01 16:48:20.530903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.944 [2024-10-01 16:48:20.530911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29ff720 is same with the state(6) to be set
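Each flush sequence above has the same shape: every outstanding READ on the I/O qpair is echoed by nvme_io_qpair_print_command and completed with ABORTED - SQ DELETION (00/08) as the submission queue is deleted, and a single nvme_tcp recv-state error closes out the affected tqpair before the controllers are reset. When triaging a log like this, a small script can reduce each flush to one summary line; the sketch below is a hypothetical helper (not part of the autotest output or the SPDK tree), assuming only the record format visible in this log:

import re
import sys

# Record patterns exactly as printed by nvme_qpair.c / nvme_tcp.c in the log
# above. This log only shows READ commands, so only READ is matched here.
RECORD = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: READ sqid:\d+ cid:\d+ nsid:\d+ lba:(?P<lba>\d+) len:\d+"
    r"|spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION"
    r"|nvme_tcp_qpair_set_recv_state: \*ERROR\*: The recv state of tqpair=(?P<tqpair>0x[0-9a-f]+)"
)

def summarize(text: str) -> None:
    """Print one line per qpair flush: aborted-completion count and LBA span."""
    lbas = []
    aborts = 0
    for m in RECORD.finditer(text):
        if m.group("lba") is not None:
            lbas.append(int(m.group("lba")))
        elif m.group("tqpair") is not None:
            # The recv-state error is what terminates each flush run in this log.
            if lbas:
                print(f"tqpair={m.group('tqpair')}: {aborts} aborted, lba {min(lbas)}..{max(lbas)}")
            lbas, aborts = [], 0
        else:
            aborts += 1

if __name__ == "__main__":
    # Usage (hypothetical): python3 summarize_aborts.py < console.log
    summarize(sys.stdin.read())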
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.944 [2024-10-01 16:48:20.532233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.944 [2024-10-01 16:48:20.532243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.944 [2024-10-01 16:48:20.532252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.944 [2024-10-01 16:48:20.532261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.944 [2024-10-01 16:48:20.532270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.944 [2024-10-01 16:48:20.532280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.944 [2024-10-01 16:48:20.532287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.944 [2024-10-01 16:48:20.532296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.944 [2024-10-01 16:48:20.532303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.944 [2024-10-01 16:48:20.532312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.944 [2024-10-01 16:48:20.532319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.944 [2024-10-01 16:48:20.532328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.944 [2024-10-01 16:48:20.532335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.944 [2024-10-01 16:48:20.532344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.944 [2024-10-01 16:48:20.532350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.944 [2024-10-01 16:48:20.532359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.944 [2024-10-01 16:48:20.532366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.945 [2024-10-01 16:48:20.532375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.945 [2024-10-01 16:48:20.532382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.945 [2024-10-01 16:48:20.532390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.945 [2024-10-01 16:48:20.532397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.945 [2024-10-01 16:48:20.532406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.945 [2024-10-01 16:48:20.532413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.945 [2024-10-01 16:48:20.532422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.945 [2024-10-01 16:48:20.532429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.945 [2024-10-01 16:48:20.532437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.945 [2024-10-01 16:48:20.532445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.945 [2024-10-01 16:48:20.532453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.945 [2024-10-01 16:48:20.532460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.945 [2024-10-01 16:48:20.532471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.945 [2024-10-01 16:48:20.532478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.945 [2024-10-01 16:48:20.532488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.945 [2024-10-01 16:48:20.532495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.945 [2024-10-01 16:48:20.532504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.945 [2024-10-01 16:48:20.532511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.945 [2024-10-01 16:48:20.532520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.945 [2024-10-01 16:48:20.532527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.945 [2024-10-01 16:48:20.532536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.945 [2024-10-01 16:48:20.532543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.945 [2024-10-01 16:48:20.532552] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:28.945 [2024-10-01 16:48:20.532559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" pairs repeated for cid:26-63, lba:19712-24448 (len:128, lba advancing by 128 per cid), timestamps 16:48:20.532568-16:48:20.533168 ...]
00:23:28.946 [2024-10-01 16:48:20.533176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x291a780 is same with the state(6) to be set
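A quick decode of the flood of completions above: in spdk_nvme_print_completion()'s "(00/08)", the first number is the NVMe status code type (0x00, generic command status) and the second the status code (0x08, "Command Aborted due to SQ Deletion" in the NVMe base spec). That is the expected status here: the reset path deletes the I/O submission queue while a full queue of sequential 128-block READs is still outstanding, so each one completes as aborted. A minimal, self-contained decoder for just this status pair (values from the spec; a hypothetical helper, not SPDK code):

    #include <stdio.h>

    /* Map the "(SCT/SC)" pair printed by spdk_nvme_print_completion()
     * to a name.  Only the status seen in this log is handled; the
     * values come from the NVMe base spec, Generic Command Status. */
    static const char *nvme_status_str(unsigned int sct, unsigned int sc)
    {
        if (sct == 0x00 && sc == 0x08) {
            return "ABORTED - SQ DELETION";
        }
        return "UNKNOWN";
    }

    int main(void)
    {
        printf("(00/08) -> %s\n", nvme_status_str(0x00, 0x08));
        return 0;
    }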
00:23:28.946 [2024-10-01 16:48:20.534629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:28.946 [2024-10-01 16:48:20.534653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:23:28.946 [2024-10-01 16:48:20.534662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:23:28.946 [2024-10-01 16:48:20.534774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:23:28.946 [2024-10-01 16:48:20.535172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.946 [2024-10-01 16:48:20.535186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fed60 with addr=10.0.0.2, port=4420
00:23:28.946 [2024-10-01 16:48:20.535193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fed60 is same with the state(6) to be set
00:23:28.946 [2024-10-01 16:48:20.535506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.946 [2024-10-01 16:48:20.535515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x250e3b0 with addr=10.0.0.2, port=4420
00:23:28.946 [2024-10-01 16:48:20.535522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250e3b0 is same with the state(6) to be set
00:23:28.946 [2024-10-01 16:48:20.535860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.946 [2024-10-01 16:48:20.535870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fc100 with addr=10.0.0.2, port=4420
00:23:28.946 [2024-10-01 16:48:20.535877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fc100 is same with the state(6) to be set
00:23:28.946 [2024-10-01 16:48:20.536860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:23:28.946 [2024-10-01 16:48:20.537172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.946 [2024-10-01 16:48:20.537185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x298c060 with addr=10.0.0.2, port=4420
00:23:28.946 [2024-10-01 16:48:20.537192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x298c060 is same with the state(6) to be set
00:23:28.946 [2024-10-01 16:48:20.537201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fed60 (9): Bad file descriptor
00:23:28.946 [2024-10-01 16:48:20.537210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250e3b0 (9): Bad file descriptor
00:23:28.946 [2024-10-01 16:48:20.537218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fc100 (9): Bad file descriptor
00:23:28.946 [2024-10-01 16:48:20.537653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.946 [2024-10-01 16:48:20.537666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x296f5c0 with addr=10.0.0.2, port=4420
00:23:28.946 [2024-10-01 16:48:20.537673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x296f5c0 is same with the state(6) to be set
00:23:28.946 [2024-10-01 16:48:20.537681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x298c060 (9): Bad file descriptor
00:23:28.946 [2024-10-01 16:48:20.537689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:28.946 [2024-10-01 16:48:20.537695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:28.946 [2024-10-01 16:48:20.537703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:28.946 [2024-10-01 16:48:20.537714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:23:28.946 [2024-10-01 16:48:20.537720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:23:28.946 [2024-10-01 16:48:20.537726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:23:28.946 [2024-10-01 16:48:20.537736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:23:28.946 [2024-10-01 16:48:20.537742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:23:28.946 [2024-10-01 16:48:20.537748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
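On Linux, errno 111 is ECONNREFUSED: the reconnect attempts are racing the target's teardown, so nothing is listening on 10.0.0.2:4420, every TCP connect is refused, and each qpair subsequently fails with "Bad file descriptor" when flushed. A standalone illustration of the same errno, independent of SPDK's posix.c (assumes nothing is listening on the chosen loopback port):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* TCP port 4 is unassigned, so no listener is expected and
         * connect() should fail with ECONNREFUSED (errno 111). */
        struct sockaddr_in sa = {
            .sin_family = AF_INET,
            .sin_port = htons(4),
            .sin_addr.s_addr = htonl(INADDR_LOOPBACK),
        };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }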
00:23:28.946 [2024-10-01 16:48:20.537797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:28.946 [2024-10-01 16:48:20.537806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" pairs repeated for cid:1-63, lba:24704-32640 (len:128, lba advancing by 128 per cid), timestamps 16:48:20.537817-16:48:20.538817 ...]
00:23:28.948 [2024-10-01 16:48:20.538824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a00c80 is same with the state(6) to be set
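The resetting-controller / reinitialization-failed pairs in this log come from SPDK's split reset flow: disconnect the controller, start an async reconnect, poll it until it settles, and mark the controller failed if the poll errors out (as it does here, every connect() being refused). A rough sketch of that flow against SPDK's public API, using spdk_nvme_ctrlr_disconnect(), spdk_nvme_ctrlr_reconnect_async() and spdk_nvme_ctrlr_reconnect_poll_async() from spdk/nvme.h; the exact return-value contract is an assumption, so treat this as illustrative rather than as the test's implementation:

    #include <errno.h>
    #include <spdk/nvme.h>

    /* Sketch: one disconnect/reconnect cycle on an already-attached
     * controller -- the sequence behind the "resetting controller" and
     * "controller reinitialization failed" messages above. */
    static int reset_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc = spdk_nvme_ctrlr_disconnect(ctrlr);
        if (rc != 0) {
            return rc;
        }
        spdk_nvme_ctrlr_reconnect_async(ctrlr);
        do {
            rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
        } while (rc == -EAGAIN);    /* -EAGAIN: still connecting (assumed) */
        /* Any other non-zero rc is the failure case logged above, after
         * which the controller is left in a failed state. */
        return rc;
    }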
00:23:28.948 [2024-10-01 16:48:20.540001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:28.948 [2024-10-01 16:48:20.540014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" pairs repeated for cid:1-63, lba:24704-32640 (len:128, lba advancing by 128 per cid), timestamps 16:48:20.540026-16:48:20.541045 ...]
00:23:28.949 [2024-10-01 16:48:20.541053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a021c0 is same with the state(6) to be set
00:23:28.949 [2024-10-01 16:48:20.542217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:28.949 [2024-10-01 16:48:20.542230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.949 [2024-10-01 16:48:20.542242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:28.949 [2024-10-01 16:48:20.542250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.949 [2024-10-01 16:48:20.542261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:28.949 [2024-10-01 16:48:20.542269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" pairs repeated for cid:1-15, lba:24704-26496 (len:128, lba advancing by 128 per cid), timestamps 16:48:20.542280-16:48:20.542516 ...]
00:23:28.950 [2024-10-01 16:48:20.542525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16
nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.950 [2024-10-01 16:48:20.542910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.950 [2024-10-01 16:48:20.542917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.542926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.542933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.542941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.542948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.542957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.542964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.542976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.542983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.542992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.542999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.543008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:28.951 [2024-10-01 16:48:20.543015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.543024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.543031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.543040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.543049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.543058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.543065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.543074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.543081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.543090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.543097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.543106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.543112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.543122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.543128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.543137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.543144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.543152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.543159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.543168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 
16:48:20.543175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.543183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.543190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.543199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.543207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.543215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.543222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.543231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.543238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.543248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.543255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.543263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2915410 is same with the state(6) to be set 00:23:28.951 [2024-10-01 16:48:20.544441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.544452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.544464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.544473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.544484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.544492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.544503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.544511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.544522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.544530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.544541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.544548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.544557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.544563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.544572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.544579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.544588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.544595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.544604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.544610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.544619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.544626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.544635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.544644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.544653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.544660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.544669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.951 [2024-10-01 16:48:20.544676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.951 [2024-10-01 16:48:20.544685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.544990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.544997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.952 [2024-10-01 16:48:20.545314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.952 [2024-10-01 16:48:20.545323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.545329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.545338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.545346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.545354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.545361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.545370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.545377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.545386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.545392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.545401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.545408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.545417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.545424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.545433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.545441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.545450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.545457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.545465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.545472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.545480] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2917dc0 is same with the state(6) to be set 00:23:28.953 [2024-10-01 16:48:20.546653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546820] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.546989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.546996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.547005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.547012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.547024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.547031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.547040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.547047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.547056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.547063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.547072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.547079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.547087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.547095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.547104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.547111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.953 [2024-10-01 16:48:20.547119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.953 [2024-10-01 16:48:20.547126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.954 [2024-10-01 16:48:20.547135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.954 [2024-10-01 16:48:20.547142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.954 [2024-10-01 16:48:20.547151 .. 16:48:20.547679] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 34 in-flight READs on sqid:1 (cid:30..63, nsid:1, lba:28416..32640 in steps of 128, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (34 near-identical command/completion pairs condensed)
00:23:28.954 [2024-10-01 16:48:20.547687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2919200 is same with the state(6) to be set
00:23:28.954 [2024-10-01 16:48:20.549480 .. 549504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (x3)
00:23:28.954 [2024-10-01 16:48:20.549511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:23:28.954 [2024-10-01 16:48:20.549523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:23:28.954 [2024-10-01 16:48:20.549552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x296f5c0 (9): Bad file descriptor
00:23:28.954 [2024-10-01 16:48:20.549563 .. 549577] nvme_ctrlr.c:4193/1822/1106: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state -> controller reinitialization failed -> in failed state.
00:23:28.954 [2024-10-01 16:48:20.549622 .. 549667] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (x5)
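
A note on reading the aborted completions above: spdk_nvme_print_completion renders the status field as (SCT/SC), the NVMe status code type and status code in hex, so (00/08) is generic status 0x08, ABORTED - SQ DELETION — the reads were still outstanding on a submission queue that the controller reset tore down. A small lookup helper for this log's codes (the function below is ours, not part of SPDK; unlisted codes are deferred to the NVMe base specification):

    # Decode the "(SCT/SC)" hex pair printed by spdk_nvme_print_completion.
    decode_nvme_status() {
      local sct=$((16#$1)) sc=$((16#$2))
      if ((sct == 0x0 && sc == 0x08)); then
        echo "GENERIC / ABORTED - SQ DELETION"
      elif ((sct == 0x0 && sc == 0x00)); then
        echo "GENERIC / SUCCESS"
      else
        echo "sct=0x$1 sc=0x$2 (see NVMe base spec, Status Code tables)"
      fi
    }
    decode_nvme_status 00 08    # -> GENERIC / ABORTED - SQ DELETION
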
00:23:28.955 [2024-10-01 16:48:20.549723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:23:28.955 [2024-10-01 16:48:20.549732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:23:28.955 task offset: 32768 on job bdev=Nvme7n1 fails
00:23:28.955
00:23:28.955 Latency(us) -- all ten jobs ran with core mask 0x1, workload verify, queue depth 64, IO size 65536 and verification LBA range start 0x0 length 0x400, and each ended in error after roughly one second (per-job boilerplate condensed into the table):
00:23:28.955 Device      runtime(s)    IOPS   MiB/s   Fail/s   TO/s    Average        min        max
00:23:28.955 Nvme1n1           0.95  202.56   12.66    67.52   0.00  234487.93   17039.36  224233.94
00:23:28.955 Nvme2n1           0.95  134.72    8.42    67.36   0.00  307713.84   18955.03  267790.18
00:23:28.955 Nvme3n1           0.95  201.61   12.60    67.20   0.00  226825.45   19862.45  230686.72
00:23:28.955 Nvme4n1           0.96  199.95   12.50    66.65   0.00  224479.70   21878.94  222620.75
00:23:28.955 Nvme5n1           0.96  199.49   12.47    66.50   0.00  220616.47   19862.45  227460.33
00:23:28.955 Nvme6n1           0.96  199.04   12.44    66.35   0.00  216788.28   12300.60  230686.72
00:23:28.955 Nvme7n1           0.94  272.46   17.03    68.11   0.00  164854.04    1802.24  227460.33
00:23:28.955 Nvme8n1           0.97  198.58   12.41    66.19   0.00  208554.54   17442.66  254884.63
00:23:28.955 Nvme9n1           0.97  198.13   12.38    66.04   0.00  204722.41   23189.66  208102.01
00:23:28.955 Nvme10n1          0.95  134.09    8.38    67.05   0.00  262402.89   18249.26  258111.02
00:23:28.955 ===================================================================================================
00:23:28.955 Total                  1940.64  121.29   668.98   0.00  222577.42    1802.24  267790.18
00:23:28.955 [2024-10-01 16:48:20.572341] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:28.955 [2024-10-01 16:48:20.572371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:23:28.955 [2024-10-01 16:48:20.572387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:28.955 [2024-10-01 16:48:20.572679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:28.955 [2024-10-01 16:48:20.572695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x250d370 with addr=10.0.0.2, port=4420
00:23:28.955 [2024-10-01 16:48:20.572704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250d370 is same with the state(6) to be set
00:23:28.955 [2024-10-01 16:48:20.572798 .. 572814] posix.c:1055 / nvme_tcp.c:2399 / nvme_tcp.c: 337: *ERROR*: same connect() failed, errno = 111 sequence for tqpair=0x2938700 with addr=10.0.0.2, port=4420
00:23:28.955 [2024-10-01 16:48:20.572827 .. 572842] nvme_ctrlr.c:4193/1822/1106: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state -> controller reinitialization failed -> in failed state.
00:23:28.955 [2024-10-01 16:48:20.574087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:23:28.955 [2024-10-01 16:48:20.574099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:23:28.955 [2024-10-01 16:48:20.574108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
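
The errno = 111 in the connect() failures above is ECONNREFUSED: by this point the target has stopped listening on 10.0.0.2:4420, so every reconnect attempt from the host-side bdev_nvme layer is refused — which is exactly the condition a shutdown test wants to provoke. On Linux the number-to-name mapping can be confirmed straight from the kernel headers:

    grep ECONNREFUSED /usr/include/asm-generic/errno.h
    # #define ECONNREFUSED  111  /* Connection refused */
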
00:23:28.955 [2024-10-01 16:48:20.574355 .. 16:48:20.576690] The same teardown pattern then plays out across the remaining paths (repeated multi-line sequences condensed):
00:23:28.955   posix.c:1055 / nvme_tcp.c:2399 / nvme_tcp.c: 337: *ERROR*: connect() failed, errno = 111 -> sock connection error -> recv state already set, for tqpair=0x2935f60, 0x242b610, 0x296e1d0, 0x24fc100, 0x250e3b0, 0x24fed60, 0x298c060 and 0x296f5c0, all with addr=10.0.0.2, port=4420
00:23:28.955   nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair (9): Bad file descriptor, for tqpair=0x250d370, 0x2938700, 0x2935f60, 0x242b610, 0x296e1d0, 0x24fc100, 0x250e3b0, 0x24fed60, 0x298c060 and 0x296f5c0
00:23:28.955   nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: resetting controller, for cnode1 plus retries of cnode10 and cnode7
00:23:28.955   nvme_ctrlr.c:4193/1822/1106: *ERROR*: Ctrlr is in error state -> controller reinitialization failed -> in failed state, for cnode4, cnode5, cnode6, cnode8, cnode9, cnode3, cnode2 and cnode1
00:23:28.956   bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (x8)
00:23:28.955   bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (x3)
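
How long bdev_nvme keeps retrying these resets before declaring a controller failed is governed by its reconnect policy. When reproducing this behavior by hand, the knobs are exposed through the bdev_nvme_set_options RPC; the values below are illustrative, not what this run used:

    # Illustrative retry policy (typically set before controllers attach):
    scripts/rpc.py bdev_nvme_set_options \
        --reconnect-delay-sec 1 \
        --ctrlr-loss-timeout-sec 10 \
        --fast-io-fail-timeout-sec 5
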
00:23:28.956 [2024-10-01 16:48:20.576697 .. 576734] nvme_ctrlr.c:4193/1822/1106: *ERROR*: [nqn.2016-06.io.spdk:cnode10] and [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state -> controller reinitialization failed -> in failed state.
00:23:28.956 [2024-10-01 16:48:20.576761, .576767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (x2)
(xtrace prefix nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 omitted from the traced lines below)
00:23:29.217 16:48:20 -- target/shutdown.sh@137 -- # sleep 1
00:23:30.158 16:48:21 -- target/shutdown.sh@138 -- # NOT wait 2757985
00:23:30.158 16:48:21 -- common/autotest_common.sh@650 -- # local es=0
00:23:30.158 16:48:21 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2757985
00:23:30.158 16:48:21 -- common/autotest_common.sh@638 -- # local arg=wait
00:23:30.158 16:48:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:30.158 16:48:21 -- common/autotest_common.sh@642 -- # type -t wait
00:23:30.158 16:48:21 -- common/autotest_common.sh@653 -- # wait 2757985
00:23:30.158 16:48:21 -- common/autotest_common.sh@653 -- # es=255
00:23:30.158 16:48:21 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:30.158 16:48:21 -- common/autotest_common.sh@662 -- # es=127
00:23:30.158 16:48:21 -- common/autotest_common.sh@663 -- # case "$es" in
00:23:30.158 16:48:21 -- common/autotest_common.sh@670 -- # es=1
00:23:30.158 16:48:21 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:30.158 16:48:21 -- target/shutdown.sh@140 -- # stoptarget
00:23:30.158 16:48:21 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:30.158 16:48:21 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:30.158 16:48:21 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:30.158 16:48:21 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:30.158 16:48:21 -- nvmf/common.sh@514 -- # nvmfcleanup
00:23:30.158 16:48:21 -- nvmf/common.sh@121 -- # sync
00:23:30.158 16:48:21 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:30.158 16:48:21 -- nvmf/common.sh@124 -- # set +e
00:23:30.158 16:48:21 -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:30.158 16:48:21 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:30.158 rmmod nvme_tcp
00:23:30.158 rmmod nvme_fabrics
00:23:30.158 rmmod nvme_keyring
00:23:30.419 16:48:21 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:30.419 16:48:21 -- nvmf/common.sh@128 -- # set -e
00:23:30.419 16:48:21 -- nvmf/common.sh@129 -- # return 0
00:23:30.419 16:48:21 -- nvmf/common.sh@515 -- # '[' -n 2757660 ']'
00:23:30.419 16:48:21 -- nvmf/common.sh@516 -- # killprocess 2757660
00:23:30.419 16:48:21 -- common/autotest_common.sh@950 -- # '[' -z 2757660 ']'
00:23:30.419 16:48:21 -- common/autotest_common.sh@954 -- # kill -0 2757660
00:23:30.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2757660) - No such process
00:23:30.419 16:48:21 -- common/autotest_common.sh@977 -- # echo 'Process with pid 2757660 is not found'
00:23:30.419 Process with pid 2757660 is not found
00:23:30.419 16:48:21 -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:23:30.419 16:48:21 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:23:30.419 16:48:21 -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:23:30.419 16:48:21 -- nvmf/common.sh@297 -- # iptr
00:23:30.419 16:48:21 -- nvmf/common.sh@789 -- # iptables-save
00:23:30.419 16:48:21 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:23:30.419 16:48:21 -- nvmf/common.sh@789 -- # iptables-restore
00:23:30.419 16:48:21 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
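
The NOT wait 2757985 exchange above is the harness asserting failure: bdevperf was expected to die once the target went away, so wait must return non-zero, and the NOT wrapper inverts that into a test pass (the es=255 -> 127 -> 1 dance normalizes a signal-death exit code). A minimal sketch of the idiom, simplified from what autotest_common.sh actually does:

    # Simplified NOT: succeed only if the wrapped command fails.
    # (The real helper also validates the argument and normalizes
    # signal-death exit codes above 128; both are omitted here.)
    NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))
    }

    NOT wait "$perfpid"   # passes precisely because bdevperf exited non-zero
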
00:23:30.419 16:48:21 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:30.419 16:48:21 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:30.419 16:48:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:30.419 16:48:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:32.328 16:48:23 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:32.328
00:23:32.328 real 0m6.838s
00:23:32.328 user 0m15.255s
00:23:32.328 sys  0m1.146s
00:23:32.328 16:48:23 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:32.328 16:48:23 -- common/autotest_common.sh@10 -- # set +x
00:23:32.328 ************************************
00:23:32.328 END TEST nvmf_shutdown_tc3
00:23:32.328 ************************************
(xtrace prefix nvmf_tcp.nvmf_target_extra.nvmf_shutdown for the lines below)
00:23:32.328 16:48:23 -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:23:32.328 16:48:23 -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:23:32.328 16:48:23 -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:23:32.328 16:48:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:23:32.328 16:48:23 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:32.328 16:48:23 -- common/autotest_common.sh@10 -- # set +x
00:23:32.589 ************************************
00:23:32.589 START TEST nvmf_shutdown_tc4
00:23:32.589 ************************************
(xtrace prefix nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 omitted from the traced lines below)
00:23:32.589 16:48:24 -- target/shutdown.sh@145 -- # starttarget
00:23:32.589 16:48:24 -- target/shutdown.sh@16 -- # nvmftestinit
00:23:32.589 16:48:24 -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:23:32.589 16:48:24 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:32.589 16:48:24 -- nvmf/common.sh@474 -- # prepare_net_devs
00:23:32.589 16:48:24 -- nvmf/common.sh@436 -- # local -g is_hw=no
00:23:32.589 16:48:24 -- nvmf/common.sh@438 -- # remove_spdk_ns
00:23:32.589 16:48:24 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:32.589 16:48:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:32.589 16:48:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:32.589 16:48:24 -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:23:32.589 16:48:24 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:23:32.589 16:48:24 (nvmf/common.sh@309-@344: declares the pci_devs/pci_net_devs/pci_drivers/net_devs arrays and fills the e810, x722 and mlx device-ID lists from pci_bus_cache; array bookkeeping condensed)
00:23:32.590 16:48:24 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:23:32.590 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:23:32.590 16:48:24 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:23:32.590 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:23:32.590 16:48:24 (nvmf/common.sh@368-@398: both ports use the ice driver, match the e810 list, and the transport is tcp rather than rdma; per-port checks condensed)
00:23:32.590 16:48:24 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:23:32.590 Found net devices under 0000:4b:00.0: cvl_0_0
00:23:32.590 16:48:24 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:23:32.590 Found net devices under 0000:4b:00.1: cvl_0_1
00:23:32.590 16:48:24 -- nvmf/common.sh@440 -- # is_hw=yes
00:23:32.590 16:48:24 -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:23:32.590 16:48:24 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:23:32.590 16:48:24 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:32.590 16:48:24 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:32.590 16:48:24 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:32.590 16:48:24 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:23:32.590 16:48:24 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:32.590 16:48:24 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:32.590 16:48:24 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:23:32.590 16:48:24 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:23:32.590 16:48:24 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:32.590 16:48:24 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:32.590 16:48:24 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:32.590 16:48:24 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:32.590 16:48:24 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:32.590 16:48:24 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:32.590 16:48:24 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:32.590 16:48:24 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:32.590 16:48:24 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:32.590 16:48:24 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:32.851 16:48:24 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:32.851 16:48:24 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:32.851 16:48:24 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
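
Condensed to its essentials, the topology built above pins one physical e810 port (cvl_0_0, the target side, 10.0.0.2) inside a private network namespace while the peer port (cvl_0_1, the initiator side, 10.0.0.1) stays in the root namespace; without the namespace split the kernel would short-circuit same-host traffic over loopback, and with it (the ports are presumably looped back-to-back, given NET_TYPE=phy) NVMe/TCP traffic traverses real NIC hardware on a single machine. The runnable core of the setup, distilled from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The bidirectional ping exchange that follows verifies the link before the target starts:
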
00:23:32.851 16:48:24 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:32.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:32.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms
00:23:32.851
00:23:32.851 --- 10.0.0.2 ping statistics ---
00:23:32.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:32.851 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms
00:23:32.851 16:48:24 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:32.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:32.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms
00:23:32.851
00:23:32.851 --- 10.0.0.1 ping statistics ---
00:23:32.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:32.851 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms
00:23:32.851 16:48:24 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:32.851 16:48:24 -- nvmf/common.sh@448 -- # return 0
00:23:32.851 16:48:24 -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:23:32.851 16:48:24 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:32.851 16:48:24 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:23:32.851 16:48:24 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:23:32.851 16:48:24 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:32.851 16:48:24 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:23:32.851 16:48:24 -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:23:32.851 16:48:24 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:23:32.851 16:48:24 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:23:32.851 16:48:24 -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:32.851 16:48:24 -- common/autotest_common.sh@10 -- # set +x
00:23:32.851 16:48:24 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:23:32.851 16:48:24 -- nvmf/common.sh@507 -- # nvmfpid=2759034
00:23:32.851 16:48:24 -- nvmf/common.sh@508 -- # waitforlisten 2759034
00:23:32.851 16:48:24 -- common/autotest_common.sh@831 -- # '[' -z 2759034 ']'
00:23:32.851 16:48:24 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:32.851 16:48:24 -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:32.851 16:48:24 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:32.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:32.851 16:48:24 -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:32.851 16:48:24 -- common/autotest_common.sh@10 -- # set +x
00:23:32.851 [2024-10-01 16:48:24.399375] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:23:32.851 [2024-10-01 16:48:24.399420] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:32.851 [2024-10-01 16:48:24.454500] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:23:32.851 [2024-10-01 16:48:24.509478] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:32.851 [2024-10-01 16:48:24.509511] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:32.851 [2024-10-01 16:48:24.509517] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only SPDK application currently running.
00:23:32.851 [2024-10-01 16:48:24.509526] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:33.112 [2024-10-01 16:48:24.509630 .. 509747] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on cores 2, 3, 4 and 1
00:23:33.112 16:48:24 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:33.112 16:48:24 -- common/autotest_common.sh@864 -- # return 0
00:23:33.112 16:48:24 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:23:33.112 16:48:24 -- common/autotest_common.sh@730 -- # xtrace_disable
00:23:33.112 16:48:24 -- common/autotest_common.sh@10 -- # set +x
00:23:33.112 16:48:24 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:33.112 16:48:24 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:23:33.112 16:48:24 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:33.112 16:48:24 -- common/autotest_common.sh@10 -- # set +x
00:23:33.112 [2024-10-01 16:48:24.652881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
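
waitforlisten gates everything after it on the freshly launched nvmf_tgt answering on its RPC socket. A rough stand-in for its behavior (this loop is our sketch, not the harness implementation; rpc_get_methods is simply a convenient no-op RPC to poll with):

    # Poll until the SPDK app answers RPCs on its UNIX socket, or give up.
    wait_for_rpc() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
      while ((retries-- > 0)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died while starting
        scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
      done
      return 1
    }
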
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.112 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:33.112 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:33.112 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:33.112 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:33.113 16:48:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.113 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:33.113 Malloc1 00:23:33.113 [2024-10-01 16:48:24.751582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.113 Malloc2 00:23:33.372 Malloc3 00:23:33.372 Malloc4 00:23:33.372 Malloc5 00:23:33.372 Malloc6 00:23:33.372 Malloc7 00:23:33.372 Malloc8 00:23:33.372 Malloc9 00:23:33.632 Malloc10 00:23:33.632 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.632 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:33.632 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:33.632 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:33.632 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2759110 00:23:33.632 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:33.632 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:33.632 [2024-10-01 16:48:25.216748] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
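The xtrace above is the whole tc4 scenario: ten subsystems backed by Malloc1..Malloc10 are created, a long background randwrite load is started, and the target is killed while the load is still running. A minimal sketch of that flow in shell, using the values logged in this run (the canonical logic lives in test/nvmf/target/shutdown.sh):

  # Start a 20 s, queue-depth-128 randwrite load of 45056-byte I/Os
  # against the NVMe/TCP listener, in the background.
  ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!            # 2759110 in this run
  sleep 5               # let I/O ramp up before pulling the target away
  killprocess 2759034   # the nvmf target pid; perf now sees its qpairs die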
00:23:38.924 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:38.924 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2759034
00:23:38.924 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 2759034 ']'
00:23:38.925 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 2759034
00:23:38.925 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname
00:23:38.925 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:38.925 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2759034
00:23:38.925 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:38.925 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:38.925 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2759034'
killing process with pid 2759034
00:23:38.925 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 2759034
00:23:38.925 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 2759034
00:23:38.925 [2024-10-01 16:48:30.216989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x265df30 is same with the state(6) to be set
[the recv-state error above repeats, with varying offsets, for tqpairs 0x265df30, 0x265e400, 0x265da60, 0x265f2b0, 0x265f780, 0x265fc50 and 0x265ede0 as the dying target tears down each connection]
00:23:38.925 Write completed with error (sct=0, sc=8)
00:23:38.925 starting I/O failed: -6
[the two lines above repeat for every write still queued on the first qpair]
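The killprocess helper whose xtrace appears above can be reconstructed from the logged line numbers; a sketch under that assumption, not the verbatim autotest_common.sh code:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                # @950: require a pid
      kill -0 "$pid"                           # @954: is it still alive?
      if [ "$(uname)" = Linux ]; then          # @955
          process_name=$(ps --no-headers -o comm= "$pid")   # @956: "reactor_1" here
      fi
      [ "$process_name" = sudo ] && return 1   # @960: never kill sudo itself
      echo "killing process with pid $pid"     # @968
      kill "$pid"                              # @969: default SIGTERM
      wait "$pid"                              # @974: reap it and surface its exit code
  }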
00:23:38.926 [2024-10-01 16:48:30.223080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[aborted writes continue, interleaved with recv-state errors for tqpairs 0x266e210, 0x266e6e0 and 0x266ebd0]
00:23:38.926 [2024-10-01 16:48:30.223894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[the storm continues, with recv-state errors for tqpairs 0x2660120, 0x266f590 and 0x266fa80]
00:23:38.927 [2024-10-01 16:48:30.224741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[recv-state errors for tqpairs 0x266ff50 and 0x266f0c0, plus dozens more aborted writes]
00:23:38.928 [2024-10-01 16:48:30.226169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:38.928 NVMe io qpair process completion error
[recv-state errors follow for tqpairs 0x2671c50 and 0x2672140 as the next controller's connections drop]
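Each aborted command is reported with sct=0, sc=8: status code type 0 is the NVMe generic command status set, and (assuming the usual decimal rendering) code 0x08 there is "Command Aborted due to SQ Deletion", which is exactly what in-flight writes should see while the target deletes its queues. To tally the aborts from a saved console log (file name hypothetical):

  grep -c 'Write completed with error (sct=0, sc=8)' console.log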
00:23:38.928 [2024-10-01 16:48:30.228201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2672140 is same with the state(6) to be set
[recv-state errors repeat for tqpairs 0x2672140, 0x2672610 and 0x2671780, interleaved with another burst of aborted writes]
00:23:38.929 [2024-10-01 16:48:30.229013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:38.929 NVMe io qpair process completion error
[the abort pattern repeats for the next controller's qpairs]
00:23:38.929 [2024-10-01 16:48:30.230170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:38.929 [2024-10-01 16:48:30.230929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:38.929 [2024-10-01 16:48:30.231793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[aborted writes continue]
00:23:38.930 [2024-10-01 16:48:30.232979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:38.930 NVMe io qpair process completion error
00:23:38.931 [2024-10-01 16:48:30.234241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:38.931 [2024-10-01 16:48:30.235097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:38.932 [2024-10-01 16:48:30.235924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[the write-abort storm continues for every I/O still queued on the remaining qpairs]
-6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 
00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 [2024-10-01 16:48:30.238570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:38.932 NVMe io qpair process completion error 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 starting I/O failed: -6 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.932 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 [2024-10-01 16:48:30.239603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, 
sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 [2024-10-01 16:48:30.240447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write 
completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.933 starting I/O failed: -6 00:23:38.933 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 [2024-10-01 16:48:30.241329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport 
error -6 (No such device or address) on qpair id 3 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error 
(sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 [2024-10-01 16:48:30.242860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:38.934 NVMe io qpair process completion error 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with 
error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 starting I/O failed: -6 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.934 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 [2024-10-01 16:48:30.243891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed 
with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 [2024-10-01 16:48:30.244629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 
starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 [2024-10-01 16:48:30.245490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.935 starting I/O failed: -6 00:23:38.935 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, 
sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 [2024-10-01 16:48:30.247294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device 
or address) on qpair id 1 00:23:38.936 NVMe io qpair process completion error 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 [2024-10-01 16:48:30.248544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:38.936 starting I/O failed: -6 00:23:38.936 starting I/O failed: -6 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.936 Write completed with error (sct=0, sc=8) 00:23:38.936 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 
00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 [2024-10-01 16:48:30.249476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write 
completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 [2024-10-01 16:48:30.250375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting I/O failed: -6 00:23:38.937 Write completed with error (sct=0, sc=8) 00:23:38.937 starting 
I/O failed: -6
00:23:38.937 Write completed with error (sct=0, sc=8)
00:23:38.938 starting I/O failed: -6
[the two messages above alternate once per outstanding write; duplicate lines elided]
00:23:38.938 [2024-10-01 16:48:30.252487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:38.938 NVMe io qpair process completion error
[the same duplicate write-error/I-O-failed lines surround each of the following events and are likewise elided]
00:23:38.938 [2024-10-01 16:48:30.253766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:38.939 [2024-10-01 16:48:30.254533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:38.939 [2024-10-01 16:48:30.255394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:38.940 [2024-10-01 16:48:30.256762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:38.940 NVMe io qpair process completion error
00:23:38.940 [2024-10-01 16:48:30.257931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:38.941 [2024-10-01 16:48:30.258694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:38.941 [2024-10-01 16:48:30.259567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:38.942 [2024-10-01 16:48:30.262213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:38.942 NVMe io qpair process completion error
00:23:38.942 [2024-10-01 16:48:30.263483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:38.942 [2024-10-01 16:48:30.264246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:38.943 [2024-10-01 16:48:30.265121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:38.944 [2024-10-01 16:48:30.266889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:38.944 NVMe io qpair process completion error
00:23:38.944 [2024-10-01 16:48:30.269167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:38.944 NVMe io qpair process completion error
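Every failure above carries the same status pair: sct=0 (generic command status) with sc=8, which the NVMe spec defines as a command aborted because its submission queue was deleted, and each new submission fails with -6 (ENXIO, the "No such device or address" the log itself prints). That is the expected signature of this shutdown test: the target drops the TCP connection while writes are still queued. When triaging a log like this, a small filter reduces the repetition to per-qpair counts; this is a hypothetical helper, and the log filename is an assumption, not an artifact of this run.

  #!/usr/bin/env bash
  # Hypothetical triage helper; "console.log" is an assumed filename.
  LOG=${1:-console.log}

  # Count aborted write completions (sct=0, sc=8).
  echo "aborted writes (sct=0, sc=8):"
  grep -c 'Write completed with error (sct=0, sc=8)' "$LOG"

  # Summarize CQ transport errors per qpair id.
  echo "CQ transport errors by qpair id:"
  grep -o 'CQ transport error -6 (No such device or address) on qpair id [0-9]*' "$LOG" \
    | awk '{print $NF}' | sort | uniq -c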
00:23:38.944 Initializing NVMe Controllers
00:23:38.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:38.944 Controller IO queue size 128, less than required.
00:23:38.944 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:38.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:38.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:38.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:38.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:38.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:38.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:38.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:38.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:38.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
[the same two queue-size warning lines followed every attach message; duplicates elided]
00:23:38.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:38.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:38.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:38.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:38.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:38.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:38.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:38.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:38.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:38.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:38.945 Initialization complete. Launching workers.
00:23:38.945 ========================================================
00:23:38.945 Latency(us)
00:23:38.945 Device Information : IOPS MiB/s Average min max
00:23:38.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2004.01 86.11 63892.00 566.88 111668.11
00:23:38.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1991.00 85.55 64339.12 653.24 113141.15
00:23:38.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2065.66 88.76 62031.70 792.08 112992.12
00:23:38.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2058.84 88.47 62272.10 612.88 121355.64
00:23:38.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1995.05 85.72 63672.00 841.82 111269.57
00:23:38.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1957.72 84.12 65408.00 951.68 113894.23
00:23:38.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2041.98 87.74 62225.89 790.83 113962.63
00:23:38.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2015.74 86.61 63052.79 859.40 113345.78
00:23:38.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2014.89 86.58 63113.31 798.71 114330.32
00:23:38.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2005.08 86.16 63441.43 785.35 115080.05
00:23:38.945 ========================================================
00:23:38.945 Total : 20149.98 865.82 63329.97 566.88 121355.64
00:23:38.945
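The queue-size warning repeated for each controller above means the initiator asked for a deeper queue than the 128 entries the fabrics controller advertises, so the excess I/O sits queued in the host driver rather than on the wire. Following the log's own advice, a re-run would cap the queue depth. The invocation below is a sketch, not a command from this run: the flag spellings (-r/-q/-o/-w/-t) follow SPDK's perf tool and should be verified against your build, and the subsystem NQN is picked arbitrarily from the ten listed above.

  # Sketch: repeat the workload with the queue depth capped below the
  # controller's advertised 128-entry IO queue (assumed flags; verify).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -q 64 -o 4096 -w write -t 10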
00:23:38.945 [2024-10-01 16:48:30.271813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20027f0 is same with the state(6) to be set
00:23:38.945 [2024-10-01 16:48:30.271854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20029d0 is same with the state(6) to be set
00:23:38.945 [2024-10-01 16:48:30.271884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2000c90 is same with the state(6) to be set
00:23:38.945 [2024-10-01 16:48:30.271912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2000fc0 is same with the state(6) to be set
00:23:38.945 [2024-10-01 16:48:30.271948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2000630 is same with the state(6) to be set
00:23:38.945 [2024-10-01 16:48:30.271982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20074c0 is same with the state(6) to be set
00:23:38.945 [2024-10-01 16:48:30.272011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2000960 is same with the state(6) to be set
00:23:38.945 [2024-10-01 16:48:30.272042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2006e60 is same with the state(6) to be set
00:23:38.945 [2024-10-01 16:48:30.272068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002bb0 is same with the state(6) to be set
00:23:38.945 [2024-10-01 16:48:30.272094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2007190 is same with the state(6) to be set
00:23:38.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:38.945 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:39.888 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2759110
00:23:39.888 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:23:39.888 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2759110
00:23:39.888 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:23:39.888 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:39.888 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:23:39.888 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:39.888 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 2759110
00:23:39.888 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:23:39.888 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:39.888 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:39.888 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
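The trace above is the harness asserting that waiting on the perf process fails: valid_exec_arg confirms wait is runnable, the call returns non-zero, es=1 records that, and NOT inverts it so the test step passes. A simplified sketch of that inversion helper follows; the real NOT in test/common/autotest_common.sh does more (argument validation, special handling of exit codes above 128), so treat this as an illustration of the pattern only.

  # Simplified sketch of the NOT inversion pattern seen in the trace.
  NOT() {
      local es=0
      "$@" || es=$?    # run the command, recording a non-zero exit status
      (( es != 0 ))    # invert: NOT succeeds only when the command failed
  }
  NOT wait 2759110     # passes here because the perf process exited non-zero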
./local-job0-0-verify.state 00:23:39.888 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:39.888 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:39.889 rmmod nvme_tcp 00:23:39.889 rmmod nvme_fabrics 00:23:39.889 rmmod nvme_keyring 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 2759034 ']' 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 2759034 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 2759034 ']' 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 2759034 00:23:39.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2759034) - No such process 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 2759034 is not found' 00:23:39.889 Process with pid 2759034 is not found 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:23:39.889 16:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.889 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.433 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:42.433 00:23:42.433 real 0m9.579s 00:23:42.433 user 0m25.400s 00:23:42.433 sys 0m3.843s 00:23:42.433 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:42.433 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:42.433 ************************************ 00:23:42.433 END TEST nvmf_shutdown_tc4 00:23:42.433 ************************************ 00:23:42.433 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:42.433 00:23:42.433 real 0m39.944s 00:23:42.433 user 1m33.358s 00:23:42.433 sys 0m12.934s 00:23:42.433 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:42.433 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:42.433 ************************************ 00:23:42.433 END TEST nvmf_shutdown 00:23:42.433 ************************************ 00:23:42.433 16:48:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:23:42.433 00:23:42.433 real 12m57.139s 00:23:42.433 user 27m50.706s 00:23:42.433 sys 3m39.457s 00:23:42.433 16:48:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:42.433 16:48:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:42.433 ************************************ 00:23:42.433 END TEST nvmf_target_extra 00:23:42.433 ************************************ 00:23:42.433 16:48:33 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:42.433 16:48:33 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:42.433 16:48:33 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:42.433 16:48:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:42.433 ************************************ 00:23:42.433 START TEST nvmf_host 00:23:42.433 ************************************ 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:42.433 * Looking for test storage... 
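
The nvmf_shutdown_tc4 teardown traced just above amounts to the following sequence. This is a condensed, illustrative sketch assembled from the traced helpers (stoptarget, nvmfcleanup, killprocess, iptr), not the harness source; the path variable SPDK_DIR is hypothetical shorthand, and the body of remove_spdk_ns is not shown in the trace, so the namespace deletion below is an assumption.

    # Condensed teardown, assuming the helper semantics the trace implies.
    rm -f ./local-job0-0-verify.state
    rm -rf "$SPDK_DIR/test/nvmf/target/bdevperf.conf" "$SPDK_DIR/test/nvmf/target/rpcs.txt"
    sync
    modprobe -v -r nvme-tcp        # the log shows nvme_tcp, nvme_fabrics, nvme_keyring unloading
    modprobe -v -r nvme-fabrics
    # killprocess: the target had already exited, hence "No such process" above.
    if kill -0 "$nvmfpid" 2>/dev/null; then
        kill -9 "$nvmfpid"
    else
        echo "Process with pid $nvmfpid is not found"
    fi
    # iptr: strip only the SPDK_NVMF-tagged rules, leaving the rest of the ruleset intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # remove_spdk_ns (assumed behavior, not shown verbatim), then the traced address flush.
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null
    ip -4 addr flush cvl_0_1
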
00:23:42.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:42.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.433 --rc genhtml_branch_coverage=1 00:23:42.433 --rc genhtml_function_coverage=1 00:23:42.433 --rc genhtml_legend=1 00:23:42.433 --rc geninfo_all_blocks=1 00:23:42.433 --rc geninfo_unexecuted_blocks=1 00:23:42.433 00:23:42.433 ' 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:42.433 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.433 --rc genhtml_branch_coverage=1 00:23:42.433 --rc genhtml_function_coverage=1 00:23:42.433 --rc genhtml_legend=1 00:23:42.433 --rc geninfo_all_blocks=1 00:23:42.433 --rc geninfo_unexecuted_blocks=1 00:23:42.433 00:23:42.433 ' 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:42.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.433 --rc genhtml_branch_coverage=1 00:23:42.433 --rc genhtml_function_coverage=1 00:23:42.433 --rc genhtml_legend=1 00:23:42.433 --rc geninfo_all_blocks=1 00:23:42.433 --rc geninfo_unexecuted_blocks=1 00:23:42.433 00:23:42.433 ' 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:42.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.433 --rc genhtml_branch_coverage=1 00:23:42.433 --rc genhtml_function_coverage=1 00:23:42.433 --rc genhtml_legend=1 00:23:42.433 --rc geninfo_all_blocks=1 00:23:42.433 --rc geninfo_unexecuted_blocks=1 00:23:42.433 00:23:42.433 ' 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.433 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:42.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:42.434 16:48:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:42.434 16:48:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:42.434 16:48:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:42.434 16:48:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:42.434 16:48:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:42.434 16:48:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:42.434 16:48:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.434 ************************************ 00:23:42.434 START TEST nvmf_multicontroller 00:23:42.434 ************************************ 00:23:42.434 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:42.696 * Looking for test storage... 00:23:42.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:42.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.696 --rc genhtml_branch_coverage=1 00:23:42.696 --rc genhtml_function_coverage=1 00:23:42.696 --rc genhtml_legend=1 00:23:42.696 --rc geninfo_all_blocks=1 00:23:42.696 --rc geninfo_unexecuted_blocks=1 00:23:42.696 00:23:42.696 ' 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:42.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.696 --rc genhtml_branch_coverage=1 00:23:42.696 --rc genhtml_function_coverage=1 00:23:42.696 --rc genhtml_legend=1 00:23:42.696 --rc geninfo_all_blocks=1 00:23:42.696 --rc geninfo_unexecuted_blocks=1 00:23:42.696 00:23:42.696 ' 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:42.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.696 --rc genhtml_branch_coverage=1 00:23:42.696 --rc genhtml_function_coverage=1 00:23:42.696 --rc genhtml_legend=1 00:23:42.696 --rc geninfo_all_blocks=1 00:23:42.696 --rc geninfo_unexecuted_blocks=1 00:23:42.696 00:23:42.696 ' 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:42.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.696 --rc genhtml_branch_coverage=1 00:23:42.696 --rc genhtml_function_coverage=1 00:23:42.696 --rc genhtml_legend=1 00:23:42.696 --rc geninfo_all_blocks=1 00:23:42.696 --rc geninfo_unexecuted_blocks=1 00:23:42.696 00:23:42.696 ' 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:42.696 16:48:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:42.696 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:42.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:42.697 16:48:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:42.697 16:48:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:50.831 
16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:50.831 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:50.831 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.831 16:48:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:50.831 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:50.831 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
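
The NIC discovery traced above reduces to a sysfs walk: for each PCI function matching a known device ID (0x8086:0x159b, the Intel E810, in this run), list the kernel net devices registered under it. A minimal sketch of that reading of the trace, with the PCI addresses and interface names taken from the log; the real gather_supported_nvmf_pci_devs also handles the RDMA-specific and link-state filtering visible in the checks above.

    # For each candidate PCI function found in the bus cache, report its
    # kernel net devices from sysfs.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue        # skip functions with no netdev
        echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
    done
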
00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:50.831 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:50.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:50.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:23:50.832 00:23:50.832 --- 10.0.0.2 ping statistics --- 00:23:50.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.832 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:50.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:50.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:23:50.832 00:23:50.832 --- 10.0.0.1 ping statistics --- 00:23:50.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.832 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=2764273 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 2764273 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2764273 ']' 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.832 [2024-10-01 16:48:41.646639] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
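
At this point nvmf_tcp_init has split the two E810 ports across namespaces: cvl_0_0 (10.0.0.2) becomes the target side inside a fresh cvl_0_0_ns_spdk namespace, cvl_0_1 (10.0.0.1) stays in the default namespace as the initiator, and the two pings confirm reachability in both directions. Condensed from the commands traced above (an editorial consolidation, not a script from the repo):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The ACCEPT rule is comment-tagged so iptr can later delete exactly these rules:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator

With connectivity verified, nvmfappstart launches nvmf_tgt inside that namespace; the EAL parameter dump it prints continues below.
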
00:23:50.832 [2024-10-01 16:48:41.646689] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.832 [2024-10-01 16:48:41.699266] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:50.832 [2024-10-01 16:48:41.754926] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.832 [2024-10-01 16:48:41.754960] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.832 [2024-10-01 16:48:41.754966] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.832 [2024-10-01 16:48:41.754974] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.832 [2024-10-01 16:48:41.754979] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.832 [2024-10-01 16:48:41.755098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.832 [2024-10-01 16:48:41.755324] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:50.832 [2024-10-01 16:48:41.755327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.832 [2024-10-01 16:48:41.880896] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.832 Malloc0 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.832 [2024-10-01 16:48:41.943322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.832 [2024-10-01 16:48:41.955269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.832 Malloc1 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.832 16:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.832 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.832 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:50.832 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.832 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.832 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.832 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:50.832 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.832 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.832 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.832 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2764315 00:23:50.832 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:50.832 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2764315 /var/tmp/bdevperf.sock 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2764315 ']' 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
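
Read together, the rpc_cmd calls above configure the target as two single-namespace subsystems, each listening on both ports, before bdevperf attaches. A sketch of the equivalent manual sequence, assuming rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock; the two-iteration loop is an editorial consolidation of the traced per-subsystem calls, and the transport flags are reproduced exactly as traced rather than interpreted.

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192               # transport flags as traced
    for i in 1 2; do
        $rpc bdev_malloc_create 64 512 -b "Malloc$((i-1))"     # 64 MiB backing bdev, 512-byte blocks
        # -a: allow any host, -s: serial number, both as traced
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$((i-1))"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4421
    done
    # bdevperf then starts with -z (wait for RPC configuration) on its own socket,
    # and the test drives it via /var/tmp/bdevperf.sock:
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
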
00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.833 NVMe0n1 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.833 1 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:50.833 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.094 request: 00:23:51.094 { 00:23:51.094 "name": "NVMe0", 00:23:51.094 "trtype": "tcp", 00:23:51.094 "traddr": "10.0.0.2", 00:23:51.094 "adrfam": "ipv4", 00:23:51.094 "trsvcid": "4420", 00:23:51.094 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:51.094 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:51.094 "hostaddr": "10.0.0.1", 00:23:51.094 "prchk_reftag": false, 00:23:51.094 "prchk_guard": false, 00:23:51.094 "hdgst": false, 00:23:51.094 "ddgst": false, 00:23:51.094 "allow_unrecognized_csi": false, 00:23:51.094 "method": "bdev_nvme_attach_controller", 00:23:51.094 "req_id": 1 00:23:51.094 } 00:23:51.094 Got JSON-RPC error response 00:23:51.094 response: 00:23:51.094 { 00:23:51.094 "code": -114, 00:23:51.094 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:51.094 } 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.094 request: 00:23:51.094 { 00:23:51.094 "name": "NVMe0", 00:23:51.094 "trtype": "tcp", 00:23:51.094 "traddr": "10.0.0.2", 00:23:51.094 "adrfam": "ipv4", 00:23:51.094 "trsvcid": "4420", 00:23:51.094 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:51.094 "hostaddr": "10.0.0.1", 00:23:51.094 "prchk_reftag": false, 00:23:51.094 "prchk_guard": false, 00:23:51.094 "hdgst": false, 00:23:51.094 "ddgst": false, 00:23:51.094 "allow_unrecognized_csi": false, 00:23:51.094 "method": "bdev_nvme_attach_controller", 00:23:51.094 "req_id": 1 00:23:51.094 } 00:23:51.094 Got JSON-RPC error response 00:23:51.094 response: 00:23:51.094 { 00:23:51.094 "code": -114, 00:23:51.094 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:51.094 } 00:23:51.094 16:48:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.094 request: 00:23:51.094 { 00:23:51.094 "name": "NVMe0", 00:23:51.094 "trtype": "tcp", 00:23:51.094 "traddr": "10.0.0.2", 00:23:51.094 "adrfam": "ipv4", 00:23:51.094 "trsvcid": "4420", 00:23:51.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.094 "hostaddr": "10.0.0.1", 00:23:51.094 "prchk_reftag": false, 00:23:51.094 "prchk_guard": false, 00:23:51.094 "hdgst": false, 00:23:51.094 "ddgst": false, 00:23:51.094 "multipath": "disable", 00:23:51.094 "allow_unrecognized_csi": false, 00:23:51.094 "method": "bdev_nvme_attach_controller", 00:23:51.094 "req_id": 1 00:23:51.094 } 00:23:51.094 Got JSON-RPC error response 00:23:51.094 response: 00:23:51.094 { 00:23:51.094 "code": -114, 00:23:51.094 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:51.094 } 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:51.094 16:48:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.094 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:51.095 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.095 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.095 request: 00:23:51.095 { 00:23:51.095 "name": "NVMe0", 00:23:51.095 "trtype": "tcp", 00:23:51.095 "traddr": "10.0.0.2", 00:23:51.095 "adrfam": "ipv4", 00:23:51.095 "trsvcid": "4420", 00:23:51.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.095 "hostaddr": "10.0.0.1", 00:23:51.095 "prchk_reftag": false, 00:23:51.095 "prchk_guard": false, 00:23:51.095 "hdgst": false, 00:23:51.095 "ddgst": false, 00:23:51.095 "multipath": "failover", 00:23:51.095 "allow_unrecognized_csi": false, 00:23:51.095 "method": "bdev_nvme_attach_controller", 00:23:51.095 "req_id": 1 00:23:51.095 } 00:23:51.095 Got JSON-RPC error response 00:23:51.095 response: 00:23:51.095 { 00:23:51.095 "code": -114, 00:23:51.095 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:51.095 } 00:23:51.095 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:51.095 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:51.095 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:51.095 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:51.095 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:51.095 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:51.095 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.095 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.095 00:23:51.095 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
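All four NOT cases above are rejected with JSON-RPC error -114: the controller name NVMe0 is already bound to nqn.2016-06.io.spdk:cnode1 via 10.0.0.2:4420, so reusing the name for another subsystem, another host NQN, or the identical path (with or without -x disable / -x failover) fails, while the genuinely new 4421 path right above is accepted. A condensed sketch of that sequence against the bdevperf RPC socket (same assumptions as the sketch earlier; the || true guard is illustrative):

    sock=/var/tmp/bdevperf.sock
    # first path: creates controller NVMe0 and bdev NVMe0n1
    rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    # same name, different subsystem: rejected with -114, as logged above
    rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 || true
    # a new path (port 4421) to the same subsystem under the same name: accepted
    rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1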
00:23:51.095 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:51.095 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:51.095 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:51.095 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:51.095 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:23:51.095 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:51.095 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:51.354
00:23:51.354 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:51.354 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:51.354 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:23:51.354 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:51.354 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:51.354 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:51.354 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:23:51.354 16:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:52.293 {
00:23:52.293 "results": [
00:23:52.293 {
00:23:52.293 "job": "NVMe0n1",
00:23:52.293 "core_mask": "0x1",
00:23:52.293 "workload": "write",
00:23:52.293 "status": "finished",
00:23:52.293 "queue_depth": 128,
00:23:52.293 "io_size": 4096,
00:23:52.293 "runtime": 1.006652,
00:23:52.293 "iops": 28273.92187170939,
00:23:52.293 "mibps": 110.4450073113648,
00:23:52.293 "io_failed": 0,
00:23:52.293 "io_timeout": 0,
00:23:52.293 "avg_latency_us": 4518.1785013216,
00:23:52.293 "min_latency_us": 1903.0646153846153,
00:23:52.293 "max_latency_us": 7360.196923076923
00:23:52.293 }
00:23:52.293 ],
00:23:52.293 "core_count": 1
00:23:52.293 }
00:23:52.293 16:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:23:52.293 16:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:52.293 16:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:52.293 16:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:52.293 16:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]]
00:23:52.293 16:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2764315
00:23:52.294 16:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 2764315 ']' 00:23:52.294 16:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2764315 00:23:52.294 16:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:52.294 16:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:52.294 16:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2764315 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2764315' 00:23:52.554 killing process with pid 2764315 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2764315 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2764315 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:23:52.554 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:52.554 [2024-10-01 16:48:42.075045] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:23:52.554 [2024-10-01 16:48:42.075097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2764315 ]
00:23:52.554 [2024-10-01 16:48:42.151001] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:52.554 [2024-10-01 16:48:42.213030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:23:52.554 [2024-10-01 16:48:42.782260] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name b8e8c7cf-9816-41f3-8858-acbef408bacf already exists
00:23:52.554 [2024-10-01 16:48:42.782289] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:b8e8c7cf-9816-41f3-8858-acbef408bacf alias for bdev NVMe1n1
00:23:52.554 [2024-10-01 16:48:42.782297] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:23:52.554 Running I/O for 1 seconds...
00:23:52.554 28240.00 IOPS, 110.31 MiB/s
00:23:52.554 Latency(us)
00:23:52.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:52.554 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:23:52.554 NVMe0n1 : 1.01 28273.92 110.45 0.00 0.00 4518.18 1903.06 7360.20
00:23:52.554 ===================================================================================================================
00:23:52.554 Total : 28273.92 110.45 0.00 0.00 4518.18 1903.06 7360.20
00:23:52.554 Received shutdown signal, test time was about 1.000000 seconds
00:23:52.554
00:23:52.554 Latency(us)
00:23:52.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:52.554 ===================================================================================================================
00:23:52.554 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:52.554 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file
00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup
00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:52.554 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:52.554 rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring
00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 2764273 ']'
00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@516 -- # killprocess 2764273 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2764273 ']' 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2764273 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2764273 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2764273' 00:23:52.815 killing process with pid 2764273 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2764273 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2764273 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.815 16:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:55.357 00:23:55.357 real 0m12.503s 00:23:55.357 user 0m12.613s 00:23:55.357 sys 0m6.087s 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.357 ************************************ 00:23:55.357 END TEST nvmf_multicontroller 00:23:55.357 ************************************ 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.357 ************************************ 00:23:55.357 START TEST nvmf_aer 00:23:55.357 ************************************ 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:55.357 * Looking for test storage... 00:23:55.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:55.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.357 --rc genhtml_branch_coverage=1 00:23:55.357 --rc genhtml_function_coverage=1 00:23:55.357 --rc genhtml_legend=1 00:23:55.357 --rc geninfo_all_blocks=1 00:23:55.357 --rc geninfo_unexecuted_blocks=1 00:23:55.357 00:23:55.357 ' 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:55.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.357 --rc genhtml_branch_coverage=1 00:23:55.357 --rc genhtml_function_coverage=1 00:23:55.357 --rc genhtml_legend=1 00:23:55.357 --rc geninfo_all_blocks=1 00:23:55.357 --rc geninfo_unexecuted_blocks=1 00:23:55.357 00:23:55.357 ' 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:55.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.357 --rc genhtml_branch_coverage=1 00:23:55.357 --rc genhtml_function_coverage=1 00:23:55.357 --rc genhtml_legend=1 00:23:55.357 --rc geninfo_all_blocks=1 00:23:55.357 --rc geninfo_unexecuted_blocks=1 00:23:55.357 00:23:55.357 ' 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:55.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.357 --rc genhtml_branch_coverage=1 00:23:55.357 --rc genhtml_function_coverage=1 00:23:55.357 --rc genhtml_legend=1 00:23:55.357 --rc geninfo_all_blocks=1 00:23:55.357 --rc geninfo_unexecuted_blocks=1 00:23:55.357 00:23:55.357 ' 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:55.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:55.357 16:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:03.488 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:03.488 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:03.488 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:03.488 16:48:53 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:03.488 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:03.488 16:48:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.488 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.488 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.488 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:03.488 
16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:03.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:24:03.488 00:24:03.488 --- 10.0.0.2 ping statistics --- 00:24:03.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.488 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:24:03.488 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:03.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:24:03.488 00:24:03.488 --- 10.0.0.1 ping statistics --- 00:24:03.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.488 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:24:03.488 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.488 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:24:03.488 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:03.488 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.489 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:03.489 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:03.489 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.489 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:03.489 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:03.489 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:03.489 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:03.489 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:03.489 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.489 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=2768814 00:24:03.489 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 2768814 00:24:03.489 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:03.489 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2768814 ']' 00:24:03.489 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.489 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:03.489 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.489 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:03.489 16:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.489 [2024-10-01 16:48:54.168870] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
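The nvmf_tcp_init plumbing traced above (one E810 port moved into a private network namespace so initiator and target talk over real wire) condenses to roughly the following sketch; the cvl_0_0/cvl_0_1 interface names and addresses come from this rig's log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                   # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator
    # the target itself then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF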
00:24:03.489 [2024-10-01 16:48:54.168933] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.489 [2024-10-01 16:48:54.254838] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:03.489 [2024-10-01 16:48:54.329373] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.489 [2024-10-01 16:48:54.329411] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.489 [2024-10-01 16:48:54.329420] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.489 [2024-10-01 16:48:54.329427] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.489 [2024-10-01 16:48:54.329432] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.489 [2024-10-01 16:48:54.329535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.489 [2024-10-01 16:48:54.329651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.489 [2024-10-01 16:48:54.329775] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:03.489 [2024-10-01 16:48:54.329778] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.489 [2024-10-01 16:48:55.105011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.489 Malloc0 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.489 [2024-10-01 16:48:55.144473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.489 [ 00:24:03.489 { 00:24:03.489 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:03.489 "subtype": "Discovery", 00:24:03.489 "listen_addresses": [], 00:24:03.489 "allow_any_host": true, 00:24:03.489 "hosts": [] 00:24:03.489 }, 00:24:03.489 { 00:24:03.489 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.489 "subtype": "NVMe", 00:24:03.489 "listen_addresses": [ 00:24:03.489 { 00:24:03.489 "trtype": "TCP", 00:24:03.489 "adrfam": "IPv4", 00:24:03.489 "traddr": "10.0.0.2", 00:24:03.489 "trsvcid": "4420" 00:24:03.489 } 00:24:03.489 ], 00:24:03.489 "allow_any_host": true, 00:24:03.489 "hosts": [], 00:24:03.489 "serial_number": "SPDK00000000000001", 00:24:03.489 "model_number": "SPDK bdev Controller", 00:24:03.489 "max_namespaces": 2, 00:24:03.489 "min_cntlid": 1, 00:24:03.489 "max_cntlid": 65519, 00:24:03.489 "namespaces": [ 00:24:03.489 { 00:24:03.489 "nsid": 1, 00:24:03.489 "bdev_name": "Malloc0", 00:24:03.489 "name": "Malloc0", 00:24:03.489 "nguid": "C05CA3A7B56743FA8CDAD81CF66A3919", 00:24:03.489 "uuid": "c05ca3a7-b567-43fa-8cda-d81cf66a3919" 00:24:03.489 } 00:24:03.489 ] 00:24:03.489 } 00:24:03.489 ] 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2768870 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:03.489 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.750 Malloc1 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.750 [ 00:24:03.750 { 00:24:03.750 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:03.750 "subtype": "Discovery", 00:24:03.750 "listen_addresses": [], 00:24:03.750 "allow_any_host": true, 00:24:03.750 "hosts": [] 00:24:03.750 }, 00:24:03.750 { 00:24:03.750 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.750 "subtype": "NVMe", 00:24:03.750 "listen_addresses": [ 00:24:03.750 { 00:24:03.750 "trtype": "TCP", 00:24:03.750 "adrfam": "IPv4", 00:24:03.750 "traddr": "10.0.0.2", 00:24:03.750 "trsvcid": "4420" 00:24:03.750 } 00:24:03.750 ], 00:24:03.750 "allow_any_host": true, 00:24:03.750 "hosts": [], 00:24:03.750 "serial_number": "SPDK00000000000001", 00:24:03.750 "model_number": "SPDK bdev Controller", 00:24:03.750 "max_namespaces": 2, 00:24:03.750 "min_cntlid": 1, 00:24:03.750 "max_cntlid": 65519, 00:24:03.750 "namespaces": [ 00:24:03.750 { 00:24:03.750 "nsid": 1, 00:24:03.750 "bdev_name": "Malloc0", 00:24:03.750 "name": "Malloc0", 00:24:03.750 "nguid": "C05CA3A7B56743FA8CDAD81CF66A3919", 
00:24:03.750 "uuid": "c05ca3a7-b567-43fa-8cda-d81cf66a3919" 00:24:03.750 }, 00:24:03.750 { 00:24:03.750 "nsid": 2, 00:24:03.750 "bdev_name": "Malloc1", 00:24:03.750 "name": "Malloc1", 00:24:03.750 "nguid": "B837F18D25194F75B639FD8DC506831C", 00:24:03.750 "uuid": "b837f18d-2519-4f75-b639-fd8dc506831c" 00:24:03.750 } 00:24:03.750 ] 00:24:03.750 } 00:24:03.750 ] 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.750 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2768870 00:24:03.750 Asynchronous Event Request test 00:24:03.750 Attaching to 10.0.0.2 00:24:03.750 Attached to 10.0.0.2 00:24:03.750 Registering asynchronous event callbacks... 00:24:03.750 Starting namespace attribute notice tests for all controllers... 00:24:03.750 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:03.750 aer_cb - Changed Namespace 00:24:03.750 Cleaning up... 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:04.010 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:04.010 rmmod nvme_tcp 00:24:04.010 rmmod nvme_fabrics 00:24:04.011 rmmod nvme_keyring 00:24:04.011 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:04.011 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:04.011 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:04.011 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 2768814 ']' 00:24:04.011 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 2768814 00:24:04.011 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@950 -- # '[' -z 2768814 ']' 00:24:04.011 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2768814 00:24:04.011 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:24:04.011 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:04.011 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2768814 00:24:04.011 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:04.011 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:04.011 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2768814' 00:24:04.011 killing process with pid 2768814 00:24:04.011 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2768814 00:24:04.011 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2768814 00:24:04.271 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:04.271 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:04.271 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:04.271 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:04.271 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:04.271 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:24:04.271 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:24:04.271 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.271 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:04.271 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.271 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.271 16:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.179 16:48:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:06.179 00:24:06.179 real 0m11.202s 00:24:06.179 user 0m7.798s 00:24:06.179 sys 0m5.995s 00:24:06.179 16:48:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:06.179 16:48:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:06.179 ************************************ 00:24:06.179 END TEST nvmf_aer 00:24:06.179 ************************************ 00:24:06.440 16:48:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:06.440 16:48:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:06.440 16:48:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:06.440 16:48:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.440 ************************************ 00:24:06.440 START TEST nvmf_async_init 00:24:06.440 ************************************ 00:24:06.440 16:48:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:06.440 * Looking for test storage... 00:24:06.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:06.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.440 --rc genhtml_branch_coverage=1 00:24:06.440 --rc genhtml_function_coverage=1 00:24:06.440 --rc genhtml_legend=1 00:24:06.440 --rc geninfo_all_blocks=1 00:24:06.440 --rc geninfo_unexecuted_blocks=1 00:24:06.440 00:24:06.440 ' 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:06.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.440 --rc genhtml_branch_coverage=1 00:24:06.440 --rc genhtml_function_coverage=1 00:24:06.440 --rc genhtml_legend=1 00:24:06.440 --rc geninfo_all_blocks=1 00:24:06.440 --rc geninfo_unexecuted_blocks=1 00:24:06.440 00:24:06.440 ' 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:06.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.440 --rc genhtml_branch_coverage=1 00:24:06.440 --rc genhtml_function_coverage=1 00:24:06.440 --rc genhtml_legend=1 00:24:06.440 --rc geninfo_all_blocks=1 00:24:06.440 --rc geninfo_unexecuted_blocks=1 00:24:06.440 00:24:06.440 ' 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:06.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.440 --rc genhtml_branch_coverage=1 00:24:06.440 --rc genhtml_function_coverage=1 00:24:06.440 --rc genhtml_legend=1 00:24:06.440 --rc geninfo_all_blocks=1 00:24:06.440 --rc geninfo_unexecuted_blocks=1 00:24:06.440 00:24:06.440 ' 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.440 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.701 16:48:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.701 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:06.702 16:48:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=132e15dfd61a43b6a5c50309249c51e1 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.702 16:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:14.833 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:14.833 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:14.833 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:14.833 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.833 16:49:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:14.833 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:14.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:24:14.834 00:24:14.834 --- 10.0.0.2 ping statistics --- 00:24:14.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.834 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:14.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:24:14.834 00:24:14.834 --- 10.0.0.1 ping statistics --- 00:24:14.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.834 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=2773056 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 2773056 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2773056 ']' 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:14.834 16:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.834 [2024-10-01 16:49:05.484936] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
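(Annotation, not part of the trace.) The bidirectional pings above confirm the split-namespace topology nvmftestinit builds for phy runs: the target-side E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace where nvmf_tgt runs, while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace. A condensed sketch of the same commands the trace shows, assuming root privileges; everything here appears verbatim in the nvmf/common.sh steps above:

    # Sketch of the nvmftestinit network setup traced above (run as root).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The SPDK_NVMF comment tag lets teardown strip exactly these rules later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns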
00:24:14.834 [2024-10-01 16:49:05.485009] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.834 [2024-10-01 16:49:05.573566] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.834 [2024-10-01 16:49:05.665230] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.834 [2024-10-01 16:49:05.665294] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.834 [2024-10-01 16:49:05.665302] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.834 [2024-10-01 16:49:05.665308] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.834 [2024-10-01 16:49:05.665315] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:14.834 [2024-10-01 16:49:05.665344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.834 [2024-10-01 16:49:06.425390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.834 null0 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 132e15dfd61a43b6a5c50309249c51e1 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.834 [2024-10-01 16:49:06.485785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.834 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.095 nvme0n1 00:24:15.095 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.095 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:15.095 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.095 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.095 [ 00:24:15.095 { 00:24:15.095 "name": "nvme0n1", 00:24:15.095 "aliases": [ 00:24:15.095 "132e15df-d61a-43b6-a5c5-0309249c51e1" 00:24:15.095 ], 00:24:15.095 "product_name": "NVMe disk", 00:24:15.095 "block_size": 512, 00:24:15.095 "num_blocks": 2097152, 00:24:15.095 "uuid": "132e15df-d61a-43b6-a5c5-0309249c51e1", 00:24:15.095 "numa_id": 0, 00:24:15.095 "assigned_rate_limits": { 00:24:15.095 "rw_ios_per_sec": 0, 00:24:15.095 "rw_mbytes_per_sec": 0, 00:24:15.095 "r_mbytes_per_sec": 0, 00:24:15.095 "w_mbytes_per_sec": 0 00:24:15.095 }, 00:24:15.095 "claimed": false, 00:24:15.095 "zoned": false, 00:24:15.095 "supported_io_types": { 00:24:15.095 "read": true, 00:24:15.095 "write": true, 00:24:15.095 "unmap": false, 00:24:15.095 "flush": true, 00:24:15.095 "reset": true, 00:24:15.095 "nvme_admin": true, 00:24:15.095 "nvme_io": true, 00:24:15.095 "nvme_io_md": false, 00:24:15.095 "write_zeroes": true, 00:24:15.095 "zcopy": false, 00:24:15.095 "get_zone_info": false, 00:24:15.095 "zone_management": false, 00:24:15.095 "zone_append": false, 00:24:15.095 "compare": true, 00:24:15.095 "compare_and_write": true, 00:24:15.095 "abort": true, 00:24:15.095 "seek_hole": false, 00:24:15.095 "seek_data": false, 00:24:15.095 "copy": true, 00:24:15.095 "nvme_iov_md": false 00:24:15.095 }, 00:24:15.095 
"memory_domains": [ 00:24:15.095 { 00:24:15.095 "dma_device_id": "system", 00:24:15.095 "dma_device_type": 1 00:24:15.095 } 00:24:15.095 ], 00:24:15.095 "driver_specific": { 00:24:15.095 "nvme": [ 00:24:15.095 { 00:24:15.095 "trid": { 00:24:15.095 "trtype": "TCP", 00:24:15.095 "adrfam": "IPv4", 00:24:15.095 "traddr": "10.0.0.2", 00:24:15.095 "trsvcid": "4420", 00:24:15.095 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:15.095 }, 00:24:15.095 "ctrlr_data": { 00:24:15.095 "cntlid": 1, 00:24:15.095 "vendor_id": "0x8086", 00:24:15.095 "model_number": "SPDK bdev Controller", 00:24:15.095 "serial_number": "00000000000000000000", 00:24:15.095 "firmware_revision": "25.01", 00:24:15.095 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:15.095 "oacs": { 00:24:15.095 "security": 0, 00:24:15.095 "format": 0, 00:24:15.095 "firmware": 0, 00:24:15.095 "ns_manage": 0 00:24:15.095 }, 00:24:15.095 "multi_ctrlr": true, 00:24:15.095 "ana_reporting": false 00:24:15.095 }, 00:24:15.095 "vs": { 00:24:15.095 "nvme_version": "1.3" 00:24:15.095 }, 00:24:15.095 "ns_data": { 00:24:15.095 "id": 1, 00:24:15.095 "can_share": true 00:24:15.095 } 00:24:15.095 } 00:24:15.095 ], 00:24:15.095 "mp_policy": "active_passive" 00:24:15.095 } 00:24:15.095 } 00:24:15.095 ] 00:24:15.095 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.095 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:15.095 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.095 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.095 [2024-10-01 16:49:06.763512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:15.095 [2024-10-01 16:49:06.763610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4c9e0 (9): Bad file descriptor 00:24:15.356 [2024-10-01 16:49:06.896077] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.356 [ 00:24:15.356 { 00:24:15.356 "name": "nvme0n1", 00:24:15.356 "aliases": [ 00:24:15.356 "132e15df-d61a-43b6-a5c5-0309249c51e1" 00:24:15.356 ], 00:24:15.356 "product_name": "NVMe disk", 00:24:15.356 "block_size": 512, 00:24:15.356 "num_blocks": 2097152, 00:24:15.356 "uuid": "132e15df-d61a-43b6-a5c5-0309249c51e1", 00:24:15.356 "numa_id": 0, 00:24:15.356 "assigned_rate_limits": { 00:24:15.356 "rw_ios_per_sec": 0, 00:24:15.356 "rw_mbytes_per_sec": 0, 00:24:15.356 "r_mbytes_per_sec": 0, 00:24:15.356 "w_mbytes_per_sec": 0 00:24:15.356 }, 00:24:15.356 "claimed": false, 00:24:15.356 "zoned": false, 00:24:15.356 "supported_io_types": { 00:24:15.356 "read": true, 00:24:15.356 "write": true, 00:24:15.356 "unmap": false, 00:24:15.356 "flush": true, 00:24:15.356 "reset": true, 00:24:15.356 "nvme_admin": true, 00:24:15.356 "nvme_io": true, 00:24:15.356 "nvme_io_md": false, 00:24:15.356 "write_zeroes": true, 00:24:15.356 "zcopy": false, 00:24:15.356 "get_zone_info": false, 00:24:15.356 "zone_management": false, 00:24:15.356 "zone_append": false, 00:24:15.356 "compare": true, 00:24:15.356 "compare_and_write": true, 00:24:15.356 "abort": true, 00:24:15.356 "seek_hole": false, 00:24:15.356 "seek_data": false, 00:24:15.356 "copy": true, 00:24:15.356 "nvme_iov_md": false 00:24:15.356 }, 00:24:15.356 "memory_domains": [ 00:24:15.356 { 00:24:15.356 "dma_device_id": "system", 00:24:15.356 "dma_device_type": 1 00:24:15.356 } 00:24:15.356 ], 00:24:15.356 "driver_specific": { 00:24:15.356 "nvme": [ 00:24:15.356 { 00:24:15.356 "trid": { 00:24:15.356 "trtype": "TCP", 00:24:15.356 "adrfam": "IPv4", 00:24:15.356 "traddr": "10.0.0.2", 00:24:15.356 "trsvcid": "4420", 00:24:15.356 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:15.356 }, 00:24:15.356 "ctrlr_data": { 00:24:15.356 "cntlid": 2, 00:24:15.356 "vendor_id": "0x8086", 00:24:15.356 "model_number": "SPDK bdev Controller", 00:24:15.356 "serial_number": "00000000000000000000", 00:24:15.356 "firmware_revision": "25.01", 00:24:15.356 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:15.356 "oacs": { 00:24:15.356 "security": 0, 00:24:15.356 "format": 0, 00:24:15.356 "firmware": 0, 00:24:15.356 "ns_manage": 0 00:24:15.356 }, 00:24:15.356 "multi_ctrlr": true, 00:24:15.356 "ana_reporting": false 00:24:15.356 }, 00:24:15.356 "vs": { 00:24:15.356 "nvme_version": "1.3" 00:24:15.356 }, 00:24:15.356 "ns_data": { 00:24:15.356 "id": 1, 00:24:15.356 "can_share": true 00:24:15.356 } 00:24:15.356 } 00:24:15.356 ], 00:24:15.356 "mp_policy": "active_passive" 00:24:15.356 } 00:24:15.356 } 00:24:15.356 ] 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
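(Annotation, not part of the trace.) After detaching, the rest of the trace repeats the attach over a TLS-secured listener. Condensed from the host/async_init.sh steps that follow; the PSK interchange string is the test's own sample key from the trace, and rpc= is an assumed shorthand for the rpc.py invocation this job uses:

    key_path=$(mktemp)                     # the trace got /tmp/tmp.NrEtIHEGbt
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"

    $rpc keyring_file_add_key key0 "$key_path"            # register the PSK as "key0"
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel      # TLS listener on 4421
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0             # allow only this host, with PSK
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Both the listen and the attach print the "TLS support is considered experimental" notice, and the resulting bdev dump below reports cntlid 3 on trsvcid 4421.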
00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.NrEtIHEGbt 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.NrEtIHEGbt 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.NrEtIHEGbt 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.356 [2024-10-01 16:49:06.988255] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:15.356 [2024-10-01 16:49:06.988418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.356 16:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.356 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.356 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:15.356 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.356 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.356 [2024-10-01 16:49:07.012331] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:15.617 nvme0n1 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.617 [ 00:24:15.617 { 00:24:15.617 "name": "nvme0n1", 00:24:15.617 "aliases": [ 00:24:15.617 "132e15df-d61a-43b6-a5c5-0309249c51e1" 00:24:15.617 ], 00:24:15.617 "product_name": "NVMe disk", 00:24:15.617 "block_size": 512, 00:24:15.617 "num_blocks": 2097152, 00:24:15.617 "uuid": "132e15df-d61a-43b6-a5c5-0309249c51e1", 00:24:15.617 "numa_id": 0, 00:24:15.617 "assigned_rate_limits": { 00:24:15.617 "rw_ios_per_sec": 0, 00:24:15.617 "rw_mbytes_per_sec": 0, 00:24:15.617 "r_mbytes_per_sec": 0, 00:24:15.617 "w_mbytes_per_sec": 0 00:24:15.617 }, 00:24:15.617 "claimed": false, 00:24:15.617 "zoned": false, 00:24:15.617 "supported_io_types": { 00:24:15.617 "read": true, 00:24:15.617 "write": true, 00:24:15.617 "unmap": false, 00:24:15.617 "flush": true, 00:24:15.617 "reset": true, 00:24:15.617 "nvme_admin": true, 00:24:15.617 "nvme_io": true, 00:24:15.617 "nvme_io_md": false, 00:24:15.617 "write_zeroes": true, 00:24:15.617 "zcopy": false, 00:24:15.617 "get_zone_info": false, 00:24:15.617 "zone_management": false, 00:24:15.617 "zone_append": false, 00:24:15.617 "compare": true, 00:24:15.617 "compare_and_write": true, 00:24:15.617 "abort": true, 00:24:15.617 "seek_hole": false, 00:24:15.617 "seek_data": false, 00:24:15.617 "copy": true, 00:24:15.617 "nvme_iov_md": false 00:24:15.617 }, 00:24:15.617 "memory_domains": [ 00:24:15.617 { 00:24:15.617 "dma_device_id": "system", 00:24:15.617 "dma_device_type": 1 00:24:15.617 } 00:24:15.617 ], 00:24:15.617 "driver_specific": { 00:24:15.617 "nvme": [ 00:24:15.617 { 00:24:15.617 "trid": { 00:24:15.617 "trtype": "TCP", 00:24:15.617 "adrfam": "IPv4", 00:24:15.617 "traddr": "10.0.0.2", 00:24:15.617 "trsvcid": "4421", 00:24:15.617 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:15.617 }, 00:24:15.617 "ctrlr_data": { 00:24:15.617 "cntlid": 3, 00:24:15.617 "vendor_id": "0x8086", 00:24:15.617 "model_number": "SPDK bdev Controller", 00:24:15.617 "serial_number": "00000000000000000000", 00:24:15.617 "firmware_revision": "25.01", 00:24:15.617 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:15.617 "oacs": { 00:24:15.617 "security": 0, 00:24:15.617 "format": 0, 00:24:15.617 "firmware": 0, 00:24:15.617 "ns_manage": 0 00:24:15.617 }, 00:24:15.617 "multi_ctrlr": true, 00:24:15.617 "ana_reporting": false 00:24:15.617 }, 00:24:15.617 "vs": { 00:24:15.617 "nvme_version": "1.3" 00:24:15.617 }, 00:24:15.617 "ns_data": { 00:24:15.617 "id": 1, 00:24:15.617 "can_share": true 00:24:15.617 } 00:24:15.617 } 00:24:15.617 ], 00:24:15.617 "mp_policy": "active_passive" 00:24:15.617 } 00:24:15.617 } 00:24:15.617 ] 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.NrEtIHEGbt 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
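[Editorial sketch] The trace above (host/async_init.sh@53-66) is the full NVMe/TCP TLS round trip: stage an interchange-format PSK in the keyring, lock the subsystem down to a known host, open a --secure-channel listener, then attach with --psk. A condensed replay, assuming a target already serving nqn.2016-06.io.spdk:cnode0 (rpc_cmd is the harness wrapper; scripts/rpc.py takes the same arguments):

    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"                       # keyring refuses world-readable keys
    rpc_cmd keyring_file_add_key key0 "$key_path"
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Note both the listener and the attach print "TLS support is considered experimental", as captured above.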
00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:15.617 rmmod nvme_tcp 00:24:15.617 rmmod nvme_fabrics 00:24:15.617 rmmod nvme_keyring 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 2773056 ']' 00:24:15.617 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 2773056 00:24:15.618 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2773056 ']' 00:24:15.618 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2773056 00:24:15.618 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:24:15.618 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:15.618 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2773056 00:24:15.618 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:15.618 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:15.618 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2773056' 00:24:15.618 killing process with pid 2773056 00:24:15.618 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2773056 00:24:15.618 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2773056 00:24:15.877 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:15.877 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:15.877 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:15.877 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:15.877 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:24:15.877 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:24:15.877 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:15.877 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:15.877 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:15.877 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:24:15.877 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.877 16:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:18.422 00:24:18.422 real 0m11.604s 00:24:18.422 user 0m4.411s 00:24:18.422 sys 0m5.832s 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:18.422 ************************************ 00:24:18.422 END TEST nvmf_async_init 00:24:18.422 ************************************ 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.422 ************************************ 00:24:18.422 START TEST dma 00:24:18.422 ************************************ 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:18.422 * Looking for test storage... 00:24:18.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:18.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.422 --rc genhtml_branch_coverage=1 00:24:18.422 --rc genhtml_function_coverage=1 00:24:18.422 --rc genhtml_legend=1 00:24:18.422 --rc geninfo_all_blocks=1 00:24:18.422 --rc geninfo_unexecuted_blocks=1 00:24:18.422 00:24:18.422 ' 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:18.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.422 --rc genhtml_branch_coverage=1 00:24:18.422 --rc genhtml_function_coverage=1 00:24:18.422 --rc genhtml_legend=1 00:24:18.422 --rc geninfo_all_blocks=1 00:24:18.422 --rc geninfo_unexecuted_blocks=1 00:24:18.422 00:24:18.422 ' 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:18.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.422 --rc genhtml_branch_coverage=1 00:24:18.422 --rc genhtml_function_coverage=1 00:24:18.422 --rc genhtml_legend=1 00:24:18.422 --rc geninfo_all_blocks=1 00:24:18.422 --rc geninfo_unexecuted_blocks=1 00:24:18.422 00:24:18.422 ' 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:18.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.422 --rc genhtml_branch_coverage=1 00:24:18.422 --rc genhtml_function_coverage=1 00:24:18.422 --rc genhtml_legend=1 00:24:18.422 --rc geninfo_all_blocks=1 00:24:18.422 --rc geninfo_unexecuted_blocks=1 00:24:18.422 00:24:18.422 ' 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.422 
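[Editorial sketch] The lt/cmp_versions walk traced above (scripts/common.sh@333-368) splits both version strings on ".", "-" and ":" and compares component-wise, treating a missing component as smaller. A simplified re-implementation sketch of just the less-than path (the real helper also validates each component and handles the other comparison operators):

    ver_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "lcov predates 2.x"   # the branch taken above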
16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.422 16:49:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:18.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:18.423 00:24:18.423 real 0m0.208s 00:24:18.423 user 0m0.114s 00:24:18.423 sys 0m0.106s 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:18.423 ************************************ 00:24:18.423 END TEST dma 00:24:18.423 ************************************ 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.423 ************************************ 00:24:18.423 START TEST nvmf_identify 00:24:18.423 
************************************ 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:18.423 * Looking for test storage... 00:24:18.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:24:18.423 16:49:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:18.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.423 --rc genhtml_branch_coverage=1 00:24:18.423 --rc genhtml_function_coverage=1 00:24:18.423 --rc genhtml_legend=1 00:24:18.423 --rc geninfo_all_blocks=1 00:24:18.423 --rc geninfo_unexecuted_blocks=1 00:24:18.423 00:24:18.423 ' 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:18.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.423 --rc genhtml_branch_coverage=1 00:24:18.423 --rc genhtml_function_coverage=1 00:24:18.423 --rc genhtml_legend=1 00:24:18.423 --rc geninfo_all_blocks=1 00:24:18.423 --rc geninfo_unexecuted_blocks=1 00:24:18.423 00:24:18.423 ' 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:18.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.423 --rc genhtml_branch_coverage=1 00:24:18.423 --rc genhtml_function_coverage=1 00:24:18.423 --rc genhtml_legend=1 00:24:18.423 --rc geninfo_all_blocks=1 00:24:18.423 --rc geninfo_unexecuted_blocks=1 00:24:18.423 00:24:18.423 ' 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:18.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.423 --rc genhtml_branch_coverage=1 00:24:18.423 --rc genhtml_function_coverage=1 00:24:18.423 --rc genhtml_legend=1 00:24:18.423 --rc geninfo_all_blocks=1 00:24:18.423 --rc geninfo_unexecuted_blocks=1 00:24:18.423 00:24:18.423 ' 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.423 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:18.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:18.685 16:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:26.895 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:26.895 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
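[Editorial sketch] The device walk above resolves each supported PCI function to its kernel netdev through the sysfs glob pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*). The same mapping can be checked by hand on a rig like this one:

    # map an E810 function (0x8086:0x159b, ice driver) to its net device
    ls /sys/bus/pci/devices/0000:4b:00.0/net/
    # -> cvl_0_0 here; compare the "Found net devices under 0000:4b:00.0" line below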
00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:26.895 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.895 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:26.896 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:26.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:24:26.896 00:24:26.896 --- 10.0.0.2 ping statistics --- 00:24:26.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.896 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:26.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:24:26.896 00:24:26.896 --- 10.0.0.1 ping statistics --- 00:24:26.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.896 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2777365 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2777365 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2777365 ']' 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:26.896 16:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:26.896 [2024-10-01 16:49:17.570833] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
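[Editorial sketch] The nvmf_tcp_init steps above split one physical NIC pair into a target side and an initiator side on the same box: cvl_0_0 moves into its own network namespace and takes the target address, while cvl_0_1 stays in the default namespace as the initiator. A condensed replay of exactly the commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, the check performed above

The target app is then launched under "ip netns exec cvl_0_0_ns_spdk", which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD above.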
00:24:26.896 [2024-10-01 16:49:17.570896] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.896 [2024-10-01 16:49:17.658377] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:26.896 [2024-10-01 16:49:17.753754] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.896 [2024-10-01 16:49:17.753812] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.896 [2024-10-01 16:49:17.753820] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.896 [2024-10-01 16:49:17.753827] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.896 [2024-10-01 16:49:17.753833] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.896 [2024-10-01 16:49:17.753983] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.896 [2024-10-01 16:49:17.754032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.896 [2024-10-01 16:49:17.754173] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:26.896 [2024-10-01 16:49:17.754176] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.896 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:26.896 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:24:26.896 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:26.896 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.896 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:26.896 [2024-10-01 16:49:18.474989] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.896 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.896 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:26.896 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:26.896 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:26.896 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:26.896 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.896 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:26.896 Malloc0 00:24:26.896 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.896 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:26.896 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.896 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.186 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.186 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:27.186 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.186 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.186 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.186 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.186 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.186 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.186 [2024-10-01 16:49:18.570847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.186 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.186 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:27.186 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.186 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.186 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.186 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:27.186 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.186 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.186 [ 00:24:27.186 { 00:24:27.186 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:27.186 "subtype": "Discovery", 00:24:27.186 "listen_addresses": [ 00:24:27.186 { 00:24:27.186 "trtype": "TCP", 00:24:27.186 "adrfam": "IPv4", 00:24:27.186 "traddr": "10.0.0.2", 00:24:27.186 "trsvcid": "4420" 00:24:27.186 } 00:24:27.186 ], 00:24:27.186 "allow_any_host": true, 00:24:27.186 "hosts": [] 00:24:27.186 }, 00:24:27.186 { 00:24:27.186 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.186 "subtype": "NVMe", 00:24:27.186 "listen_addresses": [ 00:24:27.186 { 00:24:27.186 "trtype": "TCP", 00:24:27.186 "adrfam": "IPv4", 00:24:27.186 "traddr": "10.0.0.2", 00:24:27.186 "trsvcid": "4420" 00:24:27.186 } 00:24:27.186 ], 00:24:27.186 "allow_any_host": true, 00:24:27.186 "hosts": [], 00:24:27.186 "serial_number": "SPDK00000000000001", 00:24:27.186 "model_number": "SPDK bdev Controller", 00:24:27.186 "max_namespaces": 32, 00:24:27.186 "min_cntlid": 1, 00:24:27.186 "max_cntlid": 65519, 00:24:27.186 "namespaces": [ 00:24:27.186 { 00:24:27.186 "nsid": 1, 00:24:27.186 "bdev_name": "Malloc0", 00:24:27.186 "name": "Malloc0", 00:24:27.186 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:27.186 "eui64": "ABCDEF0123456789", 00:24:27.186 "uuid": "a9925676-b53a-4765-8f58-ba8e9478026d" 00:24:27.186 } 00:24:27.186 ] 00:24:27.186 } 00:24:27.186 ] 00:24:27.186 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.186 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:27.186 [2024-10-01 16:49:18.632412] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:24:27.186 [2024-10-01 16:49:18.632458] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2777665 ] 00:24:27.186 [2024-10-01 16:49:18.662784] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:27.186 [2024-10-01 16:49:18.662832] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:27.186 [2024-10-01 16:49:18.662837] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:27.186 [2024-10-01 16:49:18.662848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:27.186 [2024-10-01 16:49:18.662856] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:27.186 [2024-10-01 16:49:18.668256] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:27.186 [2024-10-01 16:49:18.668293] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2047760 0 00:24:27.186 [2024-10-01 16:49:18.675986] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:27.186 [2024-10-01 16:49:18.675998] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:27.186 [2024-10-01 16:49:18.676003] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:27.186 [2024-10-01 16:49:18.676006] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:27.186 [2024-10-01 16:49:18.676035] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.186 [2024-10-01 16:49:18.676041] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.186 [2024-10-01 16:49:18.676045] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2047760) 00:24:27.186 [2024-10-01 16:49:18.676058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:27.186 [2024-10-01 16:49:18.676074] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7480, cid 0, qid 0 00:24:27.186 [2024-10-01 16:49:18.683980] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.186 [2024-10-01 16:49:18.683988] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.186 [2024-10-01 16:49:18.683992] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.186 [2024-10-01 16:49:18.683996] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7480) on tqpair=0x2047760 00:24:27.186 [2024-10-01 16:49:18.684008] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:27.186 [2024-10-01 16:49:18.684015] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:27.186 [2024-10-01 16:49:18.684020] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:27.186 [2024-10-01 16:49:18.684034] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.186 [2024-10-01 16:49:18.684038] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.186 [2024-10-01 16:49:18.684041] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2047760) 00:24:27.186 [2024-10-01 16:49:18.684049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.186 [2024-10-01 16:49:18.684061] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7480, cid 0, qid 0 00:24:27.186 [2024-10-01 16:49:18.684226] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.186 [2024-10-01 16:49:18.684232] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.186 [2024-10-01 16:49:18.684236] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.186 [2024-10-01 16:49:18.684239] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7480) on tqpair=0x2047760 00:24:27.186 [2024-10-01 16:49:18.684244] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:27.186 [2024-10-01 16:49:18.684251] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:27.186 [2024-10-01 16:49:18.684258] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.186 [2024-10-01 16:49:18.684261] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.186 [2024-10-01 16:49:18.684265] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2047760) 00:24:27.186 [2024-10-01 16:49:18.684271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.186 [2024-10-01 16:49:18.684281] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7480, cid 0, qid 0 00:24:27.187 [2024-10-01 16:49:18.684467] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.187 [2024-10-01 16:49:18.684473] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.187 [2024-10-01 16:49:18.684476] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.684480] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7480) on tqpair=0x2047760 00:24:27.187 [2024-10-01 16:49:18.684485] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:27.187 [2024-10-01 16:49:18.684493] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:27.187 [2024-10-01 16:49:18.684499] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.684502] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.684506] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2047760) 00:24:27.187 [2024-10-01 16:49:18.684512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.187 [2024-10-01 16:49:18.684522] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7480, cid 0, qid 0 00:24:27.187 
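The PROPERTY GET/SET records traced above are the standard fabrics enable handshake: the host reads VS and CAP, clears CC.EN, waits for CSTS.RDY = 0, sets CC.EN = 1, then waits for CSTS.RDY = 1. In SPDK this whole state machine is driven by spdk_nvme_connect(); as a minimal host-side sketch (illustrative only, not part of the test scripts; the program name and printed format are assumptions), the cached register values can be read back afterwards through the public API:

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&opts);
        opts.name = "regs_sketch";            /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* Same connect string the test passes to spdk_nvme_identify. */
        spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2014-08.org.nvmexpress.discovery");

        /* Runs the init state machine seen in the trace: FABRIC CONNECT,
         * VS/CAP reads, CC.EN toggle, CSTS.RDY polling, IDENTIFY, AER setup. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
        union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

        printf("VS %u.%u CSTS.RDY %u\n", vs.bits.mjr, vs.bits.mnr, csts.bits.rdy);

        spdk_nvme_detach(ctrlr);
        return 0;
    }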
[2024-10-01 16:49:18.684683] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.187 [2024-10-01 16:49:18.684689] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.187 [2024-10-01 16:49:18.684692] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.684696] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7480) on tqpair=0x2047760 00:24:27.187 [2024-10-01 16:49:18.684701] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:27.187 [2024-10-01 16:49:18.684709] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.684713] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.684716] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2047760) 00:24:27.187 [2024-10-01 16:49:18.684723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.187 [2024-10-01 16:49:18.684732] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7480, cid 0, qid 0 00:24:27.187 [2024-10-01 16:49:18.684924] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.187 [2024-10-01 16:49:18.684929] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.187 [2024-10-01 16:49:18.684935] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.684938] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7480) on tqpair=0x2047760 00:24:27.187 [2024-10-01 16:49:18.684943] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:27.187 [2024-10-01 16:49:18.684948] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:27.187 [2024-10-01 16:49:18.684954] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:27.187 [2024-10-01 16:49:18.685060] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:27.187 [2024-10-01 16:49:18.685065] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:27.187 [2024-10-01 16:49:18.685073] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.685077] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.685080] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2047760) 00:24:27.187 [2024-10-01 16:49:18.685087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.187 [2024-10-01 16:49:18.685097] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7480, cid 0, qid 0 00:24:27.187 [2024-10-01 16:49:18.685279] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.187 [2024-10-01 16:49:18.685284] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:24:27.187 [2024-10-01 16:49:18.685288] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.685291] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7480) on tqpair=0x2047760 00:24:27.187 [2024-10-01 16:49:18.685296] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:27.187 [2024-10-01 16:49:18.685304] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.685308] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.685311] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2047760) 00:24:27.187 [2024-10-01 16:49:18.685318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.187 [2024-10-01 16:49:18.685327] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7480, cid 0, qid 0 00:24:27.187 [2024-10-01 16:49:18.685503] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.187 [2024-10-01 16:49:18.685509] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.187 [2024-10-01 16:49:18.685512] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.685516] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7480) on tqpair=0x2047760 00:24:27.187 [2024-10-01 16:49:18.685520] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:27.187 [2024-10-01 16:49:18.685524] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:27.187 [2024-10-01 16:49:18.685531] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:27.187 [2024-10-01 16:49:18.685539] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:27.187 [2024-10-01 16:49:18.685548] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.685551] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2047760) 00:24:27.187 [2024-10-01 16:49:18.685559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.187 [2024-10-01 16:49:18.685570] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7480, cid 0, qid 0 00:24:27.187 [2024-10-01 16:49:18.685786] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.187 [2024-10-01 16:49:18.685793] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.187 [2024-10-01 16:49:18.685796] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.685800] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2047760): datao=0, datal=4096, cccid=0 00:24:27.187 [2024-10-01 16:49:18.685805] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20a7480) on tqpair(0x2047760): expected_datao=0, 
payload_size=4096 00:24:27.187 [2024-10-01 16:49:18.685809] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.685816] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.685820] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.685957] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.187 [2024-10-01 16:49:18.685962] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.187 [2024-10-01 16:49:18.685966] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.685974] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7480) on tqpair=0x2047760 00:24:27.187 [2024-10-01 16:49:18.685981] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:27.187 [2024-10-01 16:49:18.685986] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:27.187 [2024-10-01 16:49:18.685990] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:27.187 [2024-10-01 16:49:18.685995] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:27.187 [2024-10-01 16:49:18.685999] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:27.187 [2024-10-01 16:49:18.686004] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:27.187 [2024-10-01 16:49:18.686012] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:27.187 [2024-10-01 16:49:18.686018] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.686022] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.686025] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2047760) 00:24:27.187 [2024-10-01 16:49:18.686032] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:27.187 [2024-10-01 16:49:18.686042] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7480, cid 0, qid 0 00:24:27.187 [2024-10-01 16:49:18.686185] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.187 [2024-10-01 16:49:18.686191] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.187 [2024-10-01 16:49:18.686194] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.686198] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7480) on tqpair=0x2047760 00:24:27.187 [2024-10-01 16:49:18.686205] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.686209] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.686212] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2047760) 00:24:27.187 [2024-10-01 16:49:18.686218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.187 [2024-10-01 16:49:18.686226] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.686230] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.187 [2024-10-01 16:49:18.686233] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2047760) 00:24:27.187 [2024-10-01 16:49:18.686239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.188 [2024-10-01 16:49:18.686244] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.686248] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.686251] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2047760) 00:24:27.188 [2024-10-01 16:49:18.686257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.188 [2024-10-01 16:49:18.686263] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.686266] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.686269] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2047760) 00:24:27.188 [2024-10-01 16:49:18.686275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.188 [2024-10-01 16:49:18.686280] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:27.188 [2024-10-01 16:49:18.686290] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:27.188 [2024-10-01 16:49:18.686296] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.686300] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2047760) 00:24:27.188 [2024-10-01 16:49:18.686306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.188 [2024-10-01 16:49:18.686317] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7480, cid 0, qid 0 00:24:27.188 [2024-10-01 16:49:18.686322] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7600, cid 1, qid 0 00:24:27.188 [2024-10-01 16:49:18.686326] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7780, cid 2, qid 0 00:24:27.188 [2024-10-01 16:49:18.686331] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7900, cid 3, qid 0 00:24:27.188 [2024-10-01 16:49:18.686335] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7a80, cid 4, qid 0 00:24:27.188 [2024-10-01 16:49:18.686549] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.188 [2024-10-01 16:49:18.686555] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.188 [2024-10-01 16:49:18.686558] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.686562] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x20a7a80) on tqpair=0x2047760 00:24:27.188 [2024-10-01 16:49:18.686567] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:27.188 [2024-10-01 16:49:18.686572] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:27.188 [2024-10-01 16:49:18.686582] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.686585] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2047760) 00:24:27.188 [2024-10-01 16:49:18.686591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.188 [2024-10-01 16:49:18.686601] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7a80, cid 4, qid 0 00:24:27.188 [2024-10-01 16:49:18.686784] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.188 [2024-10-01 16:49:18.686791] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.188 [2024-10-01 16:49:18.686794] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.686797] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2047760): datao=0, datal=4096, cccid=4 00:24:27.188 [2024-10-01 16:49:18.686802] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20a7a80) on tqpair(0x2047760): expected_datao=0, payload_size=4096 00:24:27.188 [2024-10-01 16:49:18.686806] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.686815] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.686819] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.727135] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.188 [2024-10-01 16:49:18.727144] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.188 [2024-10-01 16:49:18.727148] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.727151] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7a80) on tqpair=0x2047760 00:24:27.188 [2024-10-01 16:49:18.727163] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:27.188 [2024-10-01 16:49:18.727190] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.727194] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2047760) 00:24:27.188 [2024-10-01 16:49:18.727201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.188 [2024-10-01 16:49:18.727208] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.727212] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.727215] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2047760) 00:24:27.188 [2024-10-01 16:49:18.727221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.188 [2024-10-01 
16:49:18.727233] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7a80, cid 4, qid 0 00:24:27.188 [2024-10-01 16:49:18.727238] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7c00, cid 5, qid 0 00:24:27.188 [2024-10-01 16:49:18.727415] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.188 [2024-10-01 16:49:18.727420] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.188 [2024-10-01 16:49:18.727424] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.727427] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2047760): datao=0, datal=1024, cccid=4 00:24:27.188 [2024-10-01 16:49:18.727431] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20a7a80) on tqpair(0x2047760): expected_datao=0, payload_size=1024 00:24:27.188 [2024-10-01 16:49:18.727435] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.727441] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.727445] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.727450] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.188 [2024-10-01 16:49:18.727455] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.188 [2024-10-01 16:49:18.727458] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.727462] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7c00) on tqpair=0x2047760 00:24:27.188 [2024-10-01 16:49:18.770979] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.188 [2024-10-01 16:49:18.770991] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.188 [2024-10-01 16:49:18.770998] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.771002] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7a80) on tqpair=0x2047760 00:24:27.188 [2024-10-01 16:49:18.771016] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.771020] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2047760) 00:24:27.188 [2024-10-01 16:49:18.771026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.188 [2024-10-01 16:49:18.771043] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7a80, cid 4, qid 0 00:24:27.188 [2024-10-01 16:49:18.771255] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.188 [2024-10-01 16:49:18.771262] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.188 [2024-10-01 16:49:18.771265] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.771268] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2047760): datao=0, datal=3072, cccid=4 00:24:27.188 [2024-10-01 16:49:18.771273] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20a7a80) on tqpair(0x2047760): expected_datao=0, payload_size=3072 00:24:27.188 [2024-10-01 16:49:18.771277] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.771283] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.771287] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.771425] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.188 [2024-10-01 16:49:18.771431] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.188 [2024-10-01 16:49:18.771434] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.771438] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7a80) on tqpair=0x2047760 00:24:27.188 [2024-10-01 16:49:18.771445] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.771449] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2047760) 00:24:27.188 [2024-10-01 16:49:18.771455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.188 [2024-10-01 16:49:18.771468] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7a80, cid 4, qid 0 00:24:27.188 [2024-10-01 16:49:18.771643] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.188 [2024-10-01 16:49:18.771649] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.188 [2024-10-01 16:49:18.771652] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.188 [2024-10-01 16:49:18.771655] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2047760): datao=0, datal=8, cccid=4 00:24:27.188 [2024-10-01 16:49:18.771660] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20a7a80) on tqpair(0x2047760): expected_datao=0, payload_size=8 00:24:27.188 [2024-10-01 16:49:18.771664] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.189 [2024-10-01 16:49:18.771670] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.189 [2024-10-01 16:49:18.771673] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.189 [2024-10-01 16:49:18.812133] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.189 [2024-10-01 16:49:18.812141] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.189 [2024-10-01 16:49:18.812145] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.189 [2024-10-01 16:49:18.812148] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7a80) on tqpair=0x2047760 00:24:27.189 ===================================================== 00:24:27.189 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:27.189 ===================================================== 00:24:27.189 Controller Capabilities/Features 00:24:27.189 ================================ 00:24:27.189 Vendor ID: 0000 00:24:27.189 Subsystem Vendor ID: 0000 00:24:27.189 Serial Number: .................... 00:24:27.189 Model Number: ........................................ 
00:24:27.189 Firmware Version: 25.01
00:24:27.189 Recommended Arb Burst: 0
00:24:27.189 IEEE OUI Identifier: 00 00 00
00:24:27.189 Multi-path I/O
00:24:27.189 May have multiple subsystem ports: No
00:24:27.189 May have multiple controllers: No
00:24:27.189 Associated with SR-IOV VF: No
00:24:27.189 Max Data Transfer Size: 131072
00:24:27.189 Max Number of Namespaces: 0
00:24:27.189 Max Number of I/O Queues: 1024
00:24:27.189 NVMe Specification Version (VS): 1.3
00:24:27.189 NVMe Specification Version (Identify): 1.3
00:24:27.189 Maximum Queue Entries: 128
00:24:27.189 Contiguous Queues Required: Yes
00:24:27.189 Arbitration Mechanisms Supported
00:24:27.189 Weighted Round Robin: Not Supported
00:24:27.189 Vendor Specific: Not Supported
00:24:27.189 Reset Timeout: 15000 ms
00:24:27.189 Doorbell Stride: 4 bytes
00:24:27.189 NVM Subsystem Reset: Not Supported
00:24:27.189 Command Sets Supported
00:24:27.189 NVM Command Set: Supported
00:24:27.189 Boot Partition: Not Supported
00:24:27.189 Memory Page Size Minimum: 4096 bytes
00:24:27.189 Memory Page Size Maximum: 4096 bytes
00:24:27.189 Persistent Memory Region: Not Supported
00:24:27.189 Optional Asynchronous Events Supported
00:24:27.189 Namespace Attribute Notices: Not Supported
00:24:27.189 Firmware Activation Notices: Not Supported
00:24:27.189 ANA Change Notices: Not Supported
00:24:27.189 PLE Aggregate Log Change Notices: Not Supported
00:24:27.189 LBA Status Info Alert Notices: Not Supported
00:24:27.189 EGE Aggregate Log Change Notices: Not Supported
00:24:27.189 Normal NVM Subsystem Shutdown event: Not Supported
00:24:27.189 Zone Descriptor Change Notices: Not Supported
00:24:27.189 Discovery Log Change Notices: Supported
00:24:27.189 Controller Attributes
00:24:27.189 128-bit Host Identifier: Not Supported
00:24:27.189 Non-Operational Permissive Mode: Not Supported
00:24:27.189 NVM Sets: Not Supported
00:24:27.189 Read Recovery Levels: Not Supported
00:24:27.189 Endurance Groups: Not Supported
00:24:27.189 Predictable Latency Mode: Not Supported
00:24:27.189 Traffic Based Keep Alive: Not Supported
00:24:27.189 Namespace Granularity: Not Supported
00:24:27.189 SQ Associations: Not Supported
00:24:27.189 UUID List: Not Supported
00:24:27.189 Multi-Domain Subsystem: Not Supported
00:24:27.189 Fixed Capacity Management: Not Supported
00:24:27.189 Variable Capacity Management: Not Supported
00:24:27.189 Delete Endurance Group: Not Supported
00:24:27.189 Delete NVM Set: Not Supported
00:24:27.189 Extended LBA Formats Supported: Not Supported
00:24:27.189 Flexible Data Placement Supported: Not Supported
00:24:27.189 
00:24:27.189 Controller Memory Buffer Support
00:24:27.189 ================================
00:24:27.189 Supported: No
00:24:27.189 
00:24:27.189 Persistent Memory Region Support
00:24:27.189 ================================
00:24:27.189 Supported: No
00:24:27.189 
00:24:27.189 Admin Command Set Attributes
00:24:27.189 ============================
00:24:27.189 Security Send/Receive: Not Supported
00:24:27.189 Format NVM: Not Supported
00:24:27.189 Firmware Activate/Download: Not Supported
00:24:27.189 Namespace Management: Not Supported
00:24:27.189 Device Self-Test: Not Supported
00:24:27.189 Directives: Not Supported
00:24:27.189 NVMe-MI: Not Supported
00:24:27.189 Virtualization Management: Not Supported
00:24:27.189 Doorbell Buffer Config: Not Supported
00:24:27.189 Get LBA Status Capability: Not Supported
00:24:27.189 Command & Feature Lockdown Capability: Not Supported
00:24:27.189 Abort Command Limit: 1
00:24:27.189 Async Event Request Limit: 4
00:24:27.189 Number of Firmware Slots: N/A
00:24:27.189 Firmware Slot 1 Read-Only: N/A
00:24:27.189 Firmware Activation Without Reset: N/A
00:24:27.189 Multiple Update Detection Support: N/A
00:24:27.189 Firmware Update Granularity: No Information Provided
00:24:27.189 Per-Namespace SMART Log: No
00:24:27.189 Asymmetric Namespace Access Log Page: Not Supported
00:24:27.189 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:27.189 Command Effects Log Page: Not Supported
00:24:27.189 Get Log Page Extended Data: Supported
00:24:27.189 Telemetry Log Pages: Not Supported
00:24:27.189 Persistent Event Log Pages: Not Supported
00:24:27.189 Supported Log Pages Log Page: May Support
00:24:27.189 Commands Supported & Effects Log Page: Not Supported
00:24:27.189 Feature Identifiers & Effects Log Page: May Support
00:24:27.189 NVMe-MI Commands & Effects Log Page: May Support
00:24:27.189 Data Area 4 for Telemetry Log: Not Supported
00:24:27.189 Error Log Page Entries Supported: 128
00:24:27.189 Keep Alive: Not Supported
00:24:27.189 
00:24:27.189 NVM Command Set Attributes
00:24:27.189 ==========================
00:24:27.189 Submission Queue Entry Size
00:24:27.189 Max: 1
00:24:27.189 Min: 1
00:24:27.189 Completion Queue Entry Size
00:24:27.189 Max: 1
00:24:27.189 Min: 1
00:24:27.189 Number of Namespaces: 0
00:24:27.189 Compare Command: Not Supported
00:24:27.189 Write Uncorrectable Command: Not Supported
00:24:27.189 Dataset Management Command: Not Supported
00:24:27.189 Write Zeroes Command: Not Supported
00:24:27.189 Set Features Save Field: Not Supported
00:24:27.189 Reservations: Not Supported
00:24:27.189 Timestamp: Not Supported
00:24:27.189 Copy: Not Supported
00:24:27.189 Volatile Write Cache: Not Present
00:24:27.189 Atomic Write Unit (Normal): 1
00:24:27.189 Atomic Write Unit (PFail): 1
00:24:27.189 Atomic Compare & Write Unit: 1
00:24:27.189 Fused Compare & Write: Supported
00:24:27.189 Scatter-Gather List
00:24:27.189 SGL Command Set: Supported
00:24:27.189 SGL Keyed: Supported
00:24:27.189 SGL Bit Bucket Descriptor: Not Supported
00:24:27.189 SGL Metadata Pointer: Not Supported
00:24:27.189 Oversized SGL: Not Supported
00:24:27.189 SGL Metadata Address: Not Supported
00:24:27.189 SGL Offset: Supported
00:24:27.189 Transport SGL Data Block: Not Supported
00:24:27.190 Replay Protected Memory Block: Not Supported
00:24:27.190 
00:24:27.190 Firmware Slot Information
00:24:27.190 =========================
00:24:27.190 Active slot: 0
00:24:27.190 
00:24:27.190 
00:24:27.190 Error Log
00:24:27.190 =========
00:24:27.190 
00:24:27.190 Active Namespaces
00:24:27.190 =================
00:24:27.190 Discovery Log Page
00:24:27.190 ==================
00:24:27.190 Generation Counter: 2
00:24:27.190 Number of Records: 2
00:24:27.190 Record Format: 0
00:24:27.190 
00:24:27.190 Discovery Log Entry 0
00:24:27.190 ----------------------
00:24:27.190 Transport Type: 3 (TCP)
00:24:27.190 Address Family: 1 (IPv4)
00:24:27.190 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:27.190 Entry Flags:
00:24:27.190 Duplicate Returned Information: 1
00:24:27.190 Explicit Persistent Connection Support for Discovery: 1
00:24:27.190 Transport Requirements:
00:24:27.190 Secure Channel: Not Required
00:24:27.190 Port ID: 0 (0x0000)
00:24:27.190 Controller ID: 65535 (0xffff)
00:24:27.190 Admin Max SQ Size: 128
00:24:27.190 Transport Service Identifier: 4420
00:24:27.190 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:27.190 Transport Address: 10.0.0.2
00:24:27.190 Discovery Log Entry 1
00:24:27.190 ----------------------
00:24:27.190 Transport Type: 3 (TCP)
00:24:27.190 Address Family: 1 (IPv4)
00:24:27.190 Subsystem Type: 2 (NVM Subsystem)
00:24:27.190 Entry Flags:
00:24:27.190 Duplicate Returned Information: 0
00:24:27.190 Explicit Persistent Connection Support for Discovery: 0
00:24:27.190 Transport Requirements:
00:24:27.190 Secure Channel: Not Required
00:24:27.190 Port ID: 0 (0x0000)
00:24:27.190 Controller ID: 65535 (0xffff)
00:24:27.190 Admin Max SQ Size: 128
00:24:27.190 Transport Service Identifier: 4420
00:24:27.190 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:24:27.190 Transport Address: 10.0.0.2 [2024-10-01 16:49:18.812224] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:24:27.190 [2024-10-01 16:49:18.812234] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7480) on tqpair=0x2047760
00:24:27.190 [2024-10-01 16:49:18.812242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.190 [2024-10-01 16:49:18.812247] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7600) on tqpair=0x2047760
00:24:27.190 [2024-10-01 16:49:18.812251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.190 [2024-10-01 16:49:18.812256] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7780) on tqpair=0x2047760
00:24:27.190 [2024-10-01 16:49:18.812260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.190 [2024-10-01 16:49:18.812265] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7900) on tqpair=0x2047760
00:24:27.190 [2024-10-01 16:49:18.812269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.190 [2024-10-01 16:49:18.812278] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.190 [2024-10-01 16:49:18.812282] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.190 [2024-10-01 16:49:18.812285] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2047760)
00:24:27.190 [2024-10-01 16:49:18.812292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.190 [2024-10-01 16:49:18.812305] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7900, cid 3, qid 0
00:24:27.190 [2024-10-01 16:49:18.812449] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.190 [2024-10-01 16:49:18.812455] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.190 [2024-10-01 16:49:18.812459] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.190 [2024-10-01 16:49:18.812462] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7900) on tqpair=0x2047760
00:24:27.190 [2024-10-01 16:49:18.812469] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.190 [2024-10-01 16:49:18.812472] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.190 [2024-10-01 16:49:18.812476] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2047760)
00:24:27.190 [2024-10-01
16:49:18.812482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.190 [2024-10-01 16:49:18.812494] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7900, cid 3, qid 0 00:24:27.190 [2024-10-01 16:49:18.812733] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.190 [2024-10-01 16:49:18.812739] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.190 [2024-10-01 16:49:18.812743] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.190 [2024-10-01 16:49:18.812746] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7900) on tqpair=0x2047760 00:24:27.190 [2024-10-01 16:49:18.812751] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:27.190 [2024-10-01 16:49:18.812758] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:27.190 [2024-10-01 16:49:18.812766] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.190 [2024-10-01 16:49:18.812770] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.190 [2024-10-01 16:49:18.812773] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2047760) 00:24:27.190 [2024-10-01 16:49:18.812780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.190 [2024-10-01 16:49:18.812789] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7900, cid 3, qid 0 00:24:27.190 [2024-10-01 16:49:18.812934] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.190 [2024-10-01 16:49:18.812940] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.190 [2024-10-01 16:49:18.812943] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.190 [2024-10-01 16:49:18.812948] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7900) on tqpair=0x2047760 00:24:27.190 [2024-10-01 16:49:18.812958] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.190 [2024-10-01 16:49:18.812962] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.190 [2024-10-01 16:49:18.812965] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2047760) 00:24:27.190 [2024-10-01 16:49:18.812978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.190 [2024-10-01 16:49:18.812988] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7900, cid 3, qid 0 00:24:27.190 [2024-10-01 16:49:18.813162] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.190 [2024-10-01 16:49:18.813168] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.190 [2024-10-01 16:49:18.813171] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.190 [2024-10-01 16:49:18.813175] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7900) on tqpair=0x2047760 00:24:27.190 [2024-10-01 16:49:18.813184] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.190 [2024-10-01 16:49:18.813187] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.190 [2024-10-01 16:49:18.813191] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2047760) 00:24:27.190 [2024-10-01 16:49:18.813197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.190 [2024-10-01 16:49:18.813206] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7900, cid 3, qid 0 00:24:27.190 [2024-10-01 16:49:18.813410] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.190 [2024-10-01 16:49:18.813416] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.190 [2024-10-01 16:49:18.813419] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.190 [2024-10-01 16:49:18.813423] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7900) on tqpair=0x2047760 00:24:27.190 [2024-10-01 16:49:18.813432] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.190 [2024-10-01 16:49:18.813435] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.190 [2024-10-01 16:49:18.813439] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2047760) 00:24:27.190 [2024-10-01 16:49:18.813445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.190 [2024-10-01 16:49:18.813454] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7900, cid 3, qid 0 00:24:27.190 [2024-10-01 16:49:18.813630] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.190 [2024-10-01 16:49:18.813636] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.190 [2024-10-01 16:49:18.813639] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.190 [2024-10-01 16:49:18.813642] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7900) on tqpair=0x2047760 00:24:27.190 [2024-10-01 16:49:18.813651] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.190 [2024-10-01 16:49:18.813655] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.190 [2024-10-01 16:49:18.813658] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2047760) 00:24:27.191 [2024-10-01 16:49:18.813664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.191 [2024-10-01 16:49:18.813674] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7900, cid 3, qid 0 00:24:27.191 [2024-10-01 16:49:18.813849] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.191 [2024-10-01 16:49:18.813855] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.191 [2024-10-01 16:49:18.813858] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.191 [2024-10-01 16:49:18.813862] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7900) on tqpair=0x2047760 00:24:27.191 [2024-10-01 16:49:18.813873] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.191 [2024-10-01 16:49:18.813877] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.191 [2024-10-01 16:49:18.813880] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2047760) 00:24:27.191 [2024-10-01 16:49:18.813886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.191 [2024-10-01 16:49:18.813895] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7900, cid 3, qid 0 00:24:27.191 [2024-10-01 16:49:18.814118] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.191 [2024-10-01 16:49:18.814124] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.191 [2024-10-01 16:49:18.814127] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.191 [2024-10-01 16:49:18.814131] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7900) on tqpair=0x2047760 00:24:27.191 [2024-10-01 16:49:18.814140] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.191 [2024-10-01 16:49:18.814143] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.191 [2024-10-01 16:49:18.814147] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2047760) 00:24:27.191 [2024-10-01 16:49:18.814153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.191 [2024-10-01 16:49:18.814162] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7900, cid 3, qid 0 00:24:27.191 [2024-10-01 16:49:18.814322] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.191 [2024-10-01 16:49:18.814327] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.191 [2024-10-01 16:49:18.814330] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.191 [2024-10-01 16:49:18.814334] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7900) on tqpair=0x2047760 00:24:27.191 [2024-10-01 16:49:18.814343] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.191 [2024-10-01 16:49:18.814347] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.191 [2024-10-01 16:49:18.814350] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2047760) 00:24:27.191 [2024-10-01 16:49:18.814356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.191 [2024-10-01 16:49:18.814366] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7900, cid 3, qid 0 00:24:27.191 [2024-10-01 16:49:18.814553] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.191 [2024-10-01 16:49:18.814559] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.191 [2024-10-01 16:49:18.814562] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.191 [2024-10-01 16:49:18.814566] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7900) on tqpair=0x2047760 00:24:27.191 [2024-10-01 16:49:18.814575] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.191 [2024-10-01 16:49:18.814578] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.191 [2024-10-01 16:49:18.814582] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2047760) 00:24:27.191 [2024-10-01 16:49:18.814588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.191 [2024-10-01 16:49:18.814597] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7900, cid 3, qid 0 00:24:27.191 
[2024-10-01 16:49:18.814799] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.191 [2024-10-01 16:49:18.814804] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.191 [2024-10-01 16:49:18.814808] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.191 [2024-10-01 16:49:18.814811] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7900) on tqpair=0x2047760 00:24:27.191 [2024-10-01 16:49:18.814824] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.191 [2024-10-01 16:49:18.814828] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.191 [2024-10-01 16:49:18.814831] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2047760) 00:24:27.191 [2024-10-01 16:49:18.814838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.191 [2024-10-01 16:49:18.814847] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a7900, cid 3, qid 0 00:24:27.191 [2024-10-01 16:49:18.818978] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.191 [2024-10-01 16:49:18.818986] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.191 [2024-10-01 16:49:18.818989] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.191 [2024-10-01 16:49:18.818993] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20a7900) on tqpair=0x2047760 00:24:27.191 [2024-10-01 16:49:18.819000] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:24:27.191 00:24:27.191 16:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:27.191 [2024-10-01 16:49:18.857661] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
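The identify.sh@45 step above repeats the identify run against the NVM subsystem (subnqn:nqn.2016-06.io.spdk:cnode1) rather than the discovery subsystem, so it reports on the controller that exposes the Malloc0 namespace registered earlier. The core of what the spdk_nvme_identify example does can be approximated with public API calls only; a minimal sketch (the program name and the small field subset printed are assumptions, error handling trimmed):

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&opts);
        opts.name = "identify_sketch";        /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* Connect string copied from the identify.sh@45 invocation. */
        spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1");

        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* IDENTIFY CONTROLLER (CNS 01h) data is cached during init; it
         * backs the Serial/Model/Firmware lines of the printed report. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("SN: %.20s  MN: %.40s  FR: %.8s\n",
               (const char *)cdata->sn, (const char *)cdata->mn,
               (const char *)cdata->fr);

        /* Detach triggers the shutdown sequence logged at the end of the
         * first run (RTD3E read, CC.SHN set, CSTS polled to completion). */
        spdk_nvme_detach(ctrlr);
        return 0;
    }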
00:24:27.191 [2024-10-01 16:49:18.857707] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2777673 ]
00:24:27.456 [2024-10-01 16:49:18.888633] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:24:27.456 [2024-10-01 16:49:18.888677] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:24:27.456 [2024-10-01 16:49:18.888681] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:24:27.456 [2024-10-01 16:49:18.888691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:24:27.456 [2024-10-01 16:49:18.888699] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:24:27.456 [2024-10-01 16:49:18.892152] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:24:27.456 [2024-10-01 16:49:18.892181] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x181d760 0
00:24:27.456 [2024-10-01 16:49:18.899980] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:24:27.456 [2024-10-01 16:49:18.899991] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:24:27.456 [2024-10-01 16:49:18.899995] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:24:27.456 [2024-10-01 16:49:18.899998] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:24:27.456 [2024-10-01 16:49:18.900021] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.456 [2024-10-01 16:49:18.900026] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.456 [2024-10-01 16:49:18.900030] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x181d760)
00:24:27.456 [2024-10-01 16:49:18.900041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:24:27.456 [2024-10-01 16:49:18.900057] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d480, cid 0, qid 0
00:24:27.456 [2024-10-01 16:49:18.907980] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.456 [2024-10-01 16:49:18.907988] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.456 [2024-10-01 16:49:18.907995] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.456 [2024-10-01 16:49:18.908000] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d480) on tqpair=0x181d760
00:24:27.456 [2024-10-01 16:49:18.908011] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:24:27.456 [2024-10-01 16:49:18.908017] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:24:27.456 [2024-10-01 16:49:18.908022] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:24:27.456 [2024-10-01 16:49:18.908033] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.456 [2024-10-01 16:49:18.908037] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.456 [2024-10-01 16:49:18.908040] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x181d760)
00:24:27.456 [2024-10-01 16:49:18.908048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.456 [2024-10-01 16:49:18.908060] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d480, cid 0, qid 0
00:24:27.456 [2024-10-01 16:49:18.908256] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.457 [2024-10-01 16:49:18.908262] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.457 [2024-10-01 16:49:18.908265] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.908269] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d480) on tqpair=0x181d760
00:24:27.457 [2024-10-01 16:49:18.908274] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:24:27.457 [2024-10-01 16:49:18.908281] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:24:27.457 [2024-10-01 16:49:18.908287] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.908291] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.908294] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x181d760)
00:24:27.457 [2024-10-01 16:49:18.908301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.457 [2024-10-01 16:49:18.908310] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d480, cid 0, qid 0
00:24:27.457 [2024-10-01 16:49:18.908498] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.457 [2024-10-01 16:49:18.908504] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.457 [2024-10-01 16:49:18.908508] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.908511] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d480) on tqpair=0x181d760
00:24:27.457 [2024-10-01 16:49:18.908516] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:24:27.457 [2024-10-01 16:49:18.908523] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:24:27.457 [2024-10-01 16:49:18.908529] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.908533] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.908536] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x181d760)
00:24:27.457 [2024-10-01 16:49:18.908543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.457 [2024-10-01 16:49:18.908552] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d480, cid 0, qid 0
00:24:27.457 [2024-10-01 16:49:18.908680] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.457 [2024-10-01 16:49:18.908686] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.457 [2024-10-01 16:49:18.908690] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.908697] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d480) on tqpair=0x181d760
00:24:27.457 [2024-10-01 16:49:18.908702] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:24:27.457 [2024-10-01 16:49:18.908710] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.908714] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.908717] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x181d760)
00:24:27.457 [2024-10-01 16:49:18.908724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.457 [2024-10-01 16:49:18.908733] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d480, cid 0, qid 0
00:24:27.457 [2024-10-01 16:49:18.908911] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.457 [2024-10-01 16:49:18.908916] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.457 [2024-10-01 16:49:18.908920] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.908923] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d480) on tqpair=0x181d760
00:24:27.457 [2024-10-01 16:49:18.908927] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:24:27.457 [2024-10-01 16:49:18.908932] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:24:27.457 [2024-10-01 16:49:18.908939] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:24:27.457 [2024-10-01 16:49:18.909044] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:24:27.457 [2024-10-01 16:49:18.909048] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:24:27.457 [2024-10-01 16:49:18.909055] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.909058] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.909062] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x181d760)
00:24:27.457 [2024-10-01 16:49:18.909068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.457 [2024-10-01 16:49:18.909078] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d480, cid 0, qid 0
00:24:27.457 [2024-10-01 16:49:18.909163] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.457 [2024-10-01 16:49:18.909169] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.457 [2024-10-01 16:49:18.909172] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.909176] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d480) on tqpair=0x181d760
00:24:27.457 [2024-10-01 16:49:18.909180] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:24:27.457 [2024-10-01 16:49:18.909189] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.909192] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.909196] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x181d760)
00:24:27.457 [2024-10-01 16:49:18.909202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.457 [2024-10-01 16:49:18.909211] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d480, cid 0, qid 0
00:24:27.457 [2024-10-01 16:49:18.909385] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.457 [2024-10-01 16:49:18.909391] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.457 [2024-10-01 16:49:18.909394] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.909400] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d480) on tqpair=0x181d760
00:24:27.457 [2024-10-01 16:49:18.909404] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:24:27.457 [2024-10-01 16:49:18.909408] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:24:27.457 [2024-10-01 16:49:18.909416] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:24:27.457 [2024-10-01 16:49:18.909426] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:24:27.457 [2024-10-01 16:49:18.909434] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.909438] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x181d760)
00:24:27.457 [2024-10-01 16:49:18.909444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.457 [2024-10-01 16:49:18.909454] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d480, cid 0, qid 0
00:24:27.457 [2024-10-01 16:49:18.909592] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:27.457 [2024-10-01 16:49:18.909598] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:27.457 [2024-10-01 16:49:18.909601] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.909605] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x181d760): datao=0, datal=4096, cccid=0
00:24:27.457 [2024-10-01 16:49:18.909609] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x187d480) on tqpair(0x181d760): expected_datao=0, payload_size=4096
00:24:27.457 [2024-10-01 16:49:18.909613] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.909620] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.909624] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.950976] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.457 [2024-10-01 16:49:18.950986] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.457 [2024-10-01 16:49:18.950989] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.950993] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d480) on tqpair=0x181d760
00:24:27.457 [2024-10-01 16:49:18.951000] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:24:27.457 [2024-10-01 16:49:18.951004] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:24:27.457 [2024-10-01 16:49:18.951008] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:24:27.457 [2024-10-01 16:49:18.951012] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:24:27.457 [2024-10-01 16:49:18.951017] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:24:27.457 [2024-10-01 16:49:18.951021] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:24:27.457 [2024-10-01 16:49:18.951029] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:24:27.457 [2024-10-01 16:49:18.951035] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.951039] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.951043] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x181d760)
00:24:27.457 [2024-10-01 16:49:18.951049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:24:27.457 [2024-10-01 16:49:18.951063] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d480, cid 0, qid 0
00:24:27.457 [2024-10-01 16:49:18.951243] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.457 [2024-10-01 16:49:18.951249] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.457 [2024-10-01 16:49:18.951252] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.951256] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d480) on tqpair=0x181d760
00:24:27.457 [2024-10-01 16:49:18.951262] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.951266] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.951269] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x181d760)
00:24:27.457 [2024-10-01 16:49:18.951275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:27.457 [2024-10-01 16:49:18.951281] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.951285] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.951288] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x181d760)
00:24:27.457 [2024-10-01 16:49:18.951294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:27.457 [2024-10-01 16:49:18.951299] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.951303] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.951306] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x181d760)
00:24:27.457 [2024-10-01 16:49:18.951311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:27.457 [2024-10-01 16:49:18.951317] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.951320] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.951324] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x181d760)
00:24:27.457 [2024-10-01 16:49:18.951329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:27.457 [2024-10-01 16:49:18.951334] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:24:27.457 [2024-10-01 16:49:18.951343] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:24:27.457 [2024-10-01 16:49:18.951350] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.457 [2024-10-01 16:49:18.951353] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x181d760)
00:24:27.457 [2024-10-01 16:49:18.951360] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.457 [2024-10-01 16:49:18.951371] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d480, cid 0, qid 0
00:24:27.457 [2024-10-01 16:49:18.951376] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d600, cid 1, qid 0
00:24:27.457 [2024-10-01 16:49:18.951381] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d780, cid 2, qid 0
00:24:27.457 [2024-10-01 16:49:18.951385] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d900, cid 3, qid 0
00:24:27.458 [2024-10-01 16:49:18.951389] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187da80, cid 4, qid 0
00:24:27.458 [2024-10-01 16:49:18.951489] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.458 [2024-10-01 16:49:18.951496] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.458 [2024-10-01 16:49:18.951499] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:18.951504] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187da80) on tqpair=0x181d760
00:24:27.458 [2024-10-01 16:49:18.951509] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:24:27.458 [2024-10-01 16:49:18.951513] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:24:27.458 [2024-10-01 16:49:18.951521] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:24:27.458 [2024-10-01 16:49:18.951529] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:24:27.458 [2024-10-01 16:49:18.951535] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:18.951538] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:18.951542] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x181d760)
00:24:27.458 [2024-10-01 16:49:18.951548] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:24:27.458 [2024-10-01 16:49:18.951557] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187da80, cid 4, qid 0
00:24:27.458 [2024-10-01 16:49:18.951672] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.458 [2024-10-01 16:49:18.951678] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.458 [2024-10-01 16:49:18.951681] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:18.951685] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187da80) on tqpair=0x181d760
00:24:27.458 [2024-10-01 16:49:18.951744] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:24:27.458 [2024-10-01 16:49:18.951753] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:24:27.458 [2024-10-01 16:49:18.951760] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:18.951763] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x181d760)
00:24:27.458 [2024-10-01 16:49:18.951769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.458 [2024-10-01 16:49:18.951779] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187da80, cid 4, qid 0
00:24:27.458 [2024-10-01 16:49:18.951890] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:27.458 [2024-10-01 16:49:18.951896] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:27.458 [2024-10-01 16:49:18.951900] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:18.951903] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x181d760): datao=0, datal=4096, cccid=4
00:24:27.458 [2024-10-01 16:49:18.951907] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x187da80) on tqpair(0x181d760): expected_datao=0, payload_size=4096
00:24:27.458 [2024-10-01 16:49:18.951911] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:18.951923] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:18.951927] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:18.992131] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.458 [2024-10-01 16:49:18.992142] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.458 [2024-10-01 16:49:18.992145] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:18.992149] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187da80) on tqpair=0x181d760
00:24:27.458 [2024-10-01 16:49:18.992160] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:24:27.458 [2024-10-01 16:49:18.992173] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:24:27.458 [2024-10-01 16:49:18.992182] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:24:27.458 [2024-10-01 16:49:18.992189] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:18.992192] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x181d760)
00:24:27.458 [2024-10-01 16:49:18.992199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.458 [2024-10-01 16:49:18.992210] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187da80, cid 4, qid 0
00:24:27.458 [2024-10-01 16:49:18.992409] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:27.458 [2024-10-01 16:49:18.992415] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:27.458 [2024-10-01 16:49:18.992418] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:18.992421] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x181d760): datao=0, datal=4096, cccid=4
00:24:27.458 [2024-10-01 16:49:18.992425] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x187da80) on tqpair(0x181d760): expected_datao=0, payload_size=4096
00:24:27.458 [2024-10-01 16:49:18.992430] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:18.992441] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:18.992445] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:19.036978] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.458 [2024-10-01 16:49:19.036986] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.458 [2024-10-01 16:49:19.036989] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:19.036993] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187da80) on tqpair=0x181d760
00:24:27.458 [2024-10-01 16:49:19.037005] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:24:27.458 [2024-10-01 16:49:19.037014] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:24:27.458 [2024-10-01 16:49:19.037021] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:19.037025] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x181d760)
00:24:27.458 [2024-10-01 16:49:19.037031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.458 [2024-10-01 16:49:19.037042] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187da80, cid 4, qid 0
00:24:27.458 [2024-10-01 16:49:19.037193] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:27.458 [2024-10-01 16:49:19.037199] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:27.458 [2024-10-01 16:49:19.037202] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:19.037205] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x181d760): datao=0, datal=4096, cccid=4
00:24:27.458 [2024-10-01 16:49:19.037209] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x187da80) on tqpair(0x181d760): expected_datao=0, payload_size=4096
00:24:27.458 [2024-10-01 16:49:19.037213] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:19.037227] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:19.037231] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:19.037390] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.458 [2024-10-01 16:49:19.037396] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.458 [2024-10-01 16:49:19.037401] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:19.037405] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187da80) on tqpair=0x181d760
00:24:27.458 [2024-10-01 16:49:19.037412] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms)
00:24:27.458 [2024-10-01 16:49:19.037419] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms)
00:24:27.458 [2024-10-01 16:49:19.037427] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms)
00:24:27.458 [2024-10-01 16:49:19.037433] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms)
00:24:27.458 [2024-10-01 16:49:19.037438] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:24:27.458 [2024-10-01 16:49:19.037443] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:24:27.458 [2024-10-01 16:49:19.037447] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:24:27.458 [2024-10-01 16:49:19.037452] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:24:27.458 [2024-10-01 16:49:19.037457] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:24:27.458 [2024-10-01 16:49:19.037469] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:19.037473] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x181d760)
00:24:27.458 [2024-10-01 16:49:19.037479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.458 [2024-10-01 16:49:19.037485] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:19.037488] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:19.037492] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x181d760)
00:24:27.458 [2024-10-01 16:49:19.037498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:24:27.458 [2024-10-01 16:49:19.037508] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187da80, cid 4, qid 0
00:24:27.458 [2024-10-01 16:49:19.037513] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187dc00, cid 5, qid 0
00:24:27.458 [2024-10-01 16:49:19.037706] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.458 [2024-10-01 16:49:19.037712] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.458 [2024-10-01 16:49:19.037715] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:19.037718] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187da80) on tqpair=0x181d760
00:24:27.458 [2024-10-01 16:49:19.037725] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.458 [2024-10-01 16:49:19.037730] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.458 [2024-10-01 16:49:19.037733] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:19.037737] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187dc00) on tqpair=0x181d760
00:24:27.458 [2024-10-01 16:49:19.037745] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:19.037748] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x181d760)
00:24:27.458 [2024-10-01 16:49:19.037754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.458 [2024-10-01 16:49:19.037764] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187dc00, cid 5, qid 0
00:24:27.458 [2024-10-01 16:49:19.037897] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.458 [2024-10-01 16:49:19.037903] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.458 [2024-10-01 16:49:19.037906] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:19.037909] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187dc00) on tqpair=0x181d760
00:24:27.458 [2024-10-01 16:49:19.037918] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:19.037921] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x181d760)
00:24:27.458 [2024-10-01 16:49:19.037927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.458 [2024-10-01 16:49:19.037936] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187dc00, cid 5, qid 0
00:24:27.458 [2024-10-01 16:49:19.038102] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.458 [2024-10-01 16:49:19.038108] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.458 [2024-10-01 16:49:19.038111] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.458 [2024-10-01 16:49:19.038115] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187dc00) on tqpair=0x181d760
00:24:27.458 [2024-10-01 16:49:19.038123] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038127] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x181d760)
00:24:27.459 [2024-10-01 16:49:19.038133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.459 [2024-10-01 16:49:19.038142] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187dc00, cid 5, qid 0
00:24:27.459 [2024-10-01 16:49:19.038346] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.459 [2024-10-01 16:49:19.038352] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.459 [2024-10-01 16:49:19.038355] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038359] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187dc00) on tqpair=0x181d760
00:24:27.459 [2024-10-01 16:49:19.038371] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038375] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x181d760)
00:24:27.459 [2024-10-01 16:49:19.038381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.459 [2024-10-01 16:49:19.038388] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038392] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x181d760)
00:24:27.459 [2024-10-01 16:49:19.038398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.459 [2024-10-01 16:49:19.038404] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038408] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x181d760)
00:24:27.459 [2024-10-01 16:49:19.038413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.459 [2024-10-01 16:49:19.038422] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038426] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x181d760)
00:24:27.459 [2024-10-01 16:49:19.038431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.459 [2024-10-01 16:49:19.038442] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187dc00, cid 5, qid 0
00:24:27.459 [2024-10-01 16:49:19.038448] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187da80, cid 4, qid 0
00:24:27.459 [2024-10-01 16:49:19.038453] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187dd80, cid 6, qid 0
00:24:27.459 [2024-10-01 16:49:19.038457] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187df00, cid 7, qid 0
00:24:27.459 [2024-10-01 16:49:19.038629] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:27.459 [2024-10-01 16:49:19.038635] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:27.459 [2024-10-01 16:49:19.038638] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038641] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x181d760): datao=0, datal=8192, cccid=5
00:24:27.459 [2024-10-01 16:49:19.038646] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x187dc00) on tqpair(0x181d760): expected_datao=0, payload_size=8192
00:24:27.459 [2024-10-01 16:49:19.038650] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038742] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038746] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038751] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:27.459 [2024-10-01 16:49:19.038757] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:27.459 [2024-10-01 16:49:19.038760] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038763] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x181d760): datao=0, datal=512, cccid=4
00:24:27.459 [2024-10-01 16:49:19.038767] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x187da80) on tqpair(0x181d760): expected_datao=0, payload_size=512
00:24:27.459 [2024-10-01 16:49:19.038771] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038777] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038781] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038786] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:27.459 [2024-10-01 16:49:19.038791] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:27.459 [2024-10-01 16:49:19.038795] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038798] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x181d760): datao=0, datal=512, cccid=6
00:24:27.459 [2024-10-01 16:49:19.038802] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x187dd80) on tqpair(0x181d760): expected_datao=0, payload_size=512
00:24:27.459 [2024-10-01 16:49:19.038806] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038812] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038815] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038821] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:27.459 [2024-10-01 16:49:19.038826] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:27.459 [2024-10-01 16:49:19.038829] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038832] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x181d760): datao=0, datal=4096, cccid=7
00:24:27.459 [2024-10-01 16:49:19.038837] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x187df00) on tqpair(0x181d760): expected_datao=0, payload_size=4096
00:24:27.459 [2024-10-01 16:49:19.038841] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038847] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038850] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038857] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.459 [2024-10-01 16:49:19.038862] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.459 [2024-10-01 16:49:19.038865] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038870] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187dc00) on tqpair=0x181d760
00:24:27.459 [2024-10-01 16:49:19.038881] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.459 [2024-10-01 16:49:19.038887] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.459 [2024-10-01 16:49:19.038890] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038894] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187da80) on tqpair=0x181d760
00:24:27.459 [2024-10-01 16:49:19.038903] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.459 [2024-10-01 16:49:19.038909] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.459 [2024-10-01 16:49:19.038912] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038915] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187dd80) on tqpair=0x181d760
00:24:27.459 [2024-10-01 16:49:19.038922] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.459 [2024-10-01 16:49:19.038927] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.459 [2024-10-01 16:49:19.038931] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.459 [2024-10-01 16:49:19.038934] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187df00) on tqpair=0x181d760
00:24:27.459 =====================================================
00:24:27.459 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:27.459 =====================================================
00:24:27.459 Controller Capabilities/Features
00:24:27.459 ================================
00:24:27.459 Vendor ID: 8086
00:24:27.459 Subsystem Vendor ID: 8086
00:24:27.459 Serial Number: SPDK00000000000001
00:24:27.459 Model Number: SPDK bdev Controller
00:24:27.459 Firmware Version: 25.01
00:24:27.459 Recommended Arb Burst: 6
00:24:27.459 IEEE OUI Identifier: e4 d2 5c
00:24:27.459 Multi-path I/O
00:24:27.459 May have multiple subsystem ports: Yes
00:24:27.459 May have multiple controllers: Yes
00:24:27.459 Associated with SR-IOV VF: No
00:24:27.459 Max Data Transfer Size: 131072
00:24:27.459 Max Number of Namespaces: 32
00:24:27.459 Max Number of I/O Queues: 127
00:24:27.459 NVMe Specification Version (VS): 1.3
00:24:27.459 NVMe Specification Version (Identify): 1.3
00:24:27.459 Maximum Queue Entries: 128
00:24:27.459 Contiguous Queues Required: Yes
00:24:27.459 Arbitration Mechanisms Supported
00:24:27.459 Weighted Round Robin: Not Supported
00:24:27.459 Vendor Specific: Not Supported
00:24:27.459 Reset Timeout: 15000 ms
00:24:27.459 Doorbell Stride: 4 bytes
00:24:27.459 NVM Subsystem Reset: Not Supported
00:24:27.459 Command Sets Supported
00:24:27.459 NVM Command Set: Supported
00:24:27.459 Boot Partition: Not Supported
00:24:27.459 Memory Page Size Minimum: 4096 bytes
00:24:27.459 Memory Page Size Maximum: 4096 bytes
00:24:27.459 Persistent Memory Region: Not Supported
00:24:27.459 Optional Asynchronous Events Supported
00:24:27.459 Namespace Attribute Notices: Supported
00:24:27.459 Firmware Activation Notices: Not Supported
00:24:27.459 ANA Change Notices: Not Supported
00:24:27.459 PLE Aggregate Log Change Notices: Not Supported
00:24:27.459 LBA Status Info Alert Notices: Not Supported
00:24:27.459 EGE Aggregate Log Change Notices: Not Supported
00:24:27.459 Normal NVM Subsystem Shutdown event: Not Supported
00:24:27.459 Zone Descriptor Change Notices: Not Supported
00:24:27.459 Discovery Log Change Notices: Not Supported
00:24:27.459 Controller Attributes
00:24:27.459 128-bit Host Identifier: Supported
00:24:27.459 Non-Operational Permissive Mode: Not Supported
00:24:27.459 NVM Sets: Not Supported
00:24:27.459 Read Recovery Levels: Not Supported
00:24:27.459 Endurance Groups: Not Supported
00:24:27.459 Predictable Latency Mode: Not Supported
00:24:27.459 Traffic Based Keep ALive: Not Supported
00:24:27.459 Namespace Granularity: Not Supported
00:24:27.459 SQ Associations: Not Supported
00:24:27.459 UUID List: Not Supported
00:24:27.459 Multi-Domain Subsystem: Not Supported
00:24:27.459 Fixed Capacity Management: Not Supported
00:24:27.459 Variable Capacity Management: Not Supported
00:24:27.459 Delete Endurance Group: Not Supported
00:24:27.459 Delete NVM Set: Not Supported
00:24:27.459 Extended LBA Formats Supported: Not Supported
00:24:27.459 Flexible Data Placement Supported: Not Supported
00:24:27.459 
00:24:27.459 Controller Memory Buffer Support
00:24:27.459 ================================
00:24:27.459 Supported: No
00:24:27.459 
00:24:27.459 Persistent Memory Region Support
00:24:27.459 ================================
00:24:27.459 Supported: No
00:24:27.459 
00:24:27.459 Admin Command Set Attributes
00:24:27.459 ============================
00:24:27.459 Security Send/Receive: Not Supported
00:24:27.459 Format NVM: Not Supported
00:24:27.459 Firmware Activate/Download: Not Supported
00:24:27.459 Namespace Management: Not Supported
00:24:27.459 Device Self-Test: Not Supported
00:24:27.459 Directives: Not Supported
00:24:27.459 NVMe-MI: Not Supported
00:24:27.459 Virtualization Management: Not Supported
00:24:27.459 Doorbell Buffer Config: Not Supported
00:24:27.459 Get LBA Status Capability: Not Supported
00:24:27.459 Command & Feature Lockdown Capability: Not Supported
00:24:27.459 Abort Command Limit: 4
00:24:27.459 Async Event Request Limit: 4
00:24:27.459 Number of Firmware Slots: N/A
00:24:27.459 Firmware Slot 1 Read-Only: N/A
00:24:27.459 Firmware Activation Without Reset: N/A
00:24:27.459 Multiple Update Detection Support: N/A
00:24:27.459 Firmware Update Granularity: No Information Provided
00:24:27.459 Per-Namespace SMART Log: No
00:24:27.459 Asymmetric Namespace Access Log Page: Not Supported
00:24:27.459 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:24:27.459 Command Effects Log Page: Supported
00:24:27.460 Get Log Page Extended Data: Supported
00:24:27.460 Telemetry Log Pages: Not Supported
00:24:27.460 Persistent Event Log Pages: Not Supported
00:24:27.460 Supported Log Pages Log Page: May Support
00:24:27.460 Commands Supported & Effects Log Page: Not Supported
00:24:27.460 Feature Identifiers & Effects Log Page:May Support
00:24:27.460 NVMe-MI Commands & Effects Log Page: May Support
00:24:27.460 Data Area 4 for Telemetry Log: Not Supported
00:24:27.460 Error Log Page Entries Supported: 128
00:24:27.460 Keep Alive: Supported
00:24:27.460 Keep Alive Granularity: 10000 ms
00:24:27.460 
00:24:27.460 NVM Command Set Attributes
00:24:27.460 ==========================
00:24:27.460 Submission Queue Entry Size
00:24:27.460 Max: 64
00:24:27.460 Min: 64
00:24:27.460 Completion Queue Entry Size
00:24:27.460 Max: 16
00:24:27.460 Min: 16
00:24:27.460 Number of Namespaces: 32
00:24:27.460 Compare Command: Supported
00:24:27.460 Write Uncorrectable Command: Not Supported
00:24:27.460 Dataset Management Command: Supported
00:24:27.460 Write Zeroes Command: Supported
00:24:27.460 Set Features Save Field: Not Supported
00:24:27.460 Reservations: Supported
00:24:27.460 Timestamp: Not Supported
00:24:27.460 Copy: Supported
00:24:27.460 Volatile Write Cache: Present
00:24:27.460 Atomic Write Unit (Normal): 1
00:24:27.460 Atomic Write Unit (PFail): 1
00:24:27.460 Atomic Compare & Write Unit: 1
00:24:27.460 Fused Compare & Write: Supported
00:24:27.460 Scatter-Gather List
00:24:27.460 SGL Command Set: Supported
00:24:27.460 SGL Keyed: Supported
00:24:27.460 SGL Bit Bucket Descriptor: Not Supported
00:24:27.460 SGL Metadata Pointer: Not Supported
00:24:27.460 Oversized SGL: Not Supported
00:24:27.460 SGL Metadata Address: Not Supported
00:24:27.460 SGL Offset: Supported
00:24:27.460 Transport SGL Data Block: Not Supported
00:24:27.460 Replay Protected Memory Block: Not Supported
00:24:27.460 
00:24:27.460 Firmware Slot Information
00:24:27.460 =========================
00:24:27.460 Active slot: 1
00:24:27.460 Slot 1 Firmware Revision: 25.01
00:24:27.460 
00:24:27.460 
00:24:27.460 Commands Supported and Effects
00:24:27.460 ==============================
00:24:27.460 Admin Commands
00:24:27.460 --------------
00:24:27.460 Get Log Page (02h): Supported
00:24:27.460 Identify (06h): Supported
00:24:27.460 Abort (08h): Supported
00:24:27.460 Set Features (09h): Supported
00:24:27.460 Get Features (0Ah): Supported
00:24:27.460 Asynchronous Event Request (0Ch): Supported
00:24:27.460 Keep Alive (18h): Supported
00:24:27.460 I/O Commands
00:24:27.460 ------------
00:24:27.460 Flush (00h): Supported LBA-Change
00:24:27.460 Write (01h): Supported LBA-Change
00:24:27.460 Read (02h): Supported
00:24:27.460 Compare (05h): Supported
00:24:27.460 Write Zeroes (08h): Supported LBA-Change
00:24:27.460 Dataset Management (09h): Supported LBA-Change
00:24:27.460 Copy (19h): Supported LBA-Change
00:24:27.460 
00:24:27.460 Error Log
00:24:27.460 =========
00:24:27.460 
00:24:27.460 Arbitration
00:24:27.460 ===========
00:24:27.460 Arbitration Burst: 1
00:24:27.460 
00:24:27.460 Power Management
00:24:27.460 ================
00:24:27.460 Number of Power States: 1
00:24:27.460 Current Power State: Power State #0
00:24:27.460 Power State #0:
00:24:27.460 Max Power: 0.00 W
00:24:27.460 Non-Operational State: Operational
00:24:27.460 Entry Latency: Not Reported
00:24:27.460 Exit Latency: Not Reported
00:24:27.460 Relative Read Throughput: 0
00:24:27.460 Relative Read Latency: 0
00:24:27.460 Relative Write Throughput: 0
00:24:27.460 Relative Write Latency: 0
00:24:27.460 Idle Power: Not Reported
00:24:27.460 Active Power: Not Reported
00:24:27.460 Non-Operational Permissive Mode: Not Supported
00:24:27.460 
00:24:27.460 Health Information
00:24:27.460 ==================
00:24:27.460 Critical Warnings:
00:24:27.460 Available Spare Space: OK
00:24:27.460 Temperature: OK
00:24:27.460 Device Reliability: OK
00:24:27.460 Read Only: No
00:24:27.460 Volatile Memory Backup: OK
00:24:27.460 Current Temperature: 0 Kelvin (-273 Celsius)
00:24:27.460 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:24:27.460 Available Spare: 0%
00:24:27.460 Available Spare Threshold: 0%
00:24:27.460 Life Percentage Used:[2024-10-01 16:49:19.039027] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.460 [2024-10-01 16:49:19.039033] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x181d760)
00:24:27.460 [2024-10-01 16:49:19.039039] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.460 [2024-10-01 16:49:19.039050] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187df00, cid 7, qid 0
00:24:27.460 [2024-10-01 16:49:19.039203] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.460 [2024-10-01 16:49:19.039209] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.460 [2024-10-01 16:49:19.039212] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.460 [2024-10-01 16:49:19.039215] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187df00) on tqpair=0x181d760
00:24:27.460 [2024-10-01 16:49:19.039241] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:24:27.460 [2024-10-01 16:49:19.039250] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d480) on tqpair=0x181d760
00:24:27.460 [2024-10-01 16:49:19.039255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.460 [2024-10-01 16:49:19.039260] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d600) on tqpair=0x181d760
00:24:27.460 [2024-10-01 16:49:19.039265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.460 [2024-10-01 16:49:19.039269] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d780) on tqpair=0x181d760
00:24:27.460 [2024-10-01 16:49:19.039274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.460 [2024-10-01 16:49:19.039278] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d900) on tqpair=0x181d760
00:24:27.460 [2024-10-01 16:49:19.039282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.460 [2024-10-01 16:49:19.039290] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.460 [2024-10-01 16:49:19.039293] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.460 [2024-10-01 16:49:19.039297] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x181d760)
00:24:27.460 [2024-10-01 16:49:19.039303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.460 [2024-10-01 16:49:19.039314] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d900, cid 3, qid 0
00:24:27.460 [2024-10-01 16:49:19.039427] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.460 [2024-10-01 16:49:19.039433] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.460 [2024-10-01 16:49:19.039436] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.460 [2024-10-01 16:49:19.039440] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d900) on tqpair=0x181d760
00:24:27.460 [2024-10-01 16:49:19.039446] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.460 [2024-10-01 16:49:19.039449] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.460 [2024-10-01 16:49:19.039453] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x181d760)
00:24:27.460 [2024-10-01 16:49:19.039459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.460 [2024-10-01 16:49:19.039471] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d900, cid 3, qid 0
00:24:27.460 [2024-10-01 16:49:19.039669] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.460 [2024-10-01 16:49:19.039675] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.460 [2024-10-01 16:49:19.039678] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.460 [2024-10-01 16:49:19.039682] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d900) on tqpair=0x181d760
00:24:27.460 [2024-10-01 16:49:19.039686] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:24:27.460 [2024-10-01 16:49:19.039690] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:24:27.460 [2024-10-01 16:49:19.039699] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.460 [2024-10-01 16:49:19.039702] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.460 [2024-10-01 16:49:19.039706] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x181d760)
00:24:27.460 [2024-10-01 16:49:19.039712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.460 [2024-10-01 16:49:19.039722] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d900, cid 3, qid 0
00:24:27.460 [2024-10-01 16:49:19.039886] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.460 [2024-10-01 16:49:19.039891] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.460 [2024-10-01 16:49:19.039895] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.460 [2024-10-01 16:49:19.039898] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d900) on tqpair=0x181d760
00:24:27.460 [2024-10-01 16:49:19.039908] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.460 [2024-10-01 16:49:19.039911] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.460 [2024-10-01 16:49:19.039915] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x181d760)
00:24:27.460 [2024-10-01 16:49:19.039921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.460 [2024-10-01 16:49:19.039930] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d900, cid 3, qid 0
00:24:27.460 [2024-10-01 16:49:19.040104] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.460 [2024-10-01 16:49:19.040110] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.460 [2024-10-01 16:49:19.040114] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.460 [2024-10-01 16:49:19.040117] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d900) on tqpair=0x181d760
00:24:27.460 [2024-10-01 16:49:19.040127] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.461 [2024-10-01 16:49:19.040130] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.461 [2024-10-01 16:49:19.040134] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x181d760)
00:24:27.461 [2024-10-01 16:49:19.040140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.461 [2024-10-01 16:49:19.040152] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d900, cid 3, qid 0
00:24:27.461 [2024-10-01 16:49:19.040324] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.461 [2024-10-01 16:49:19.040330] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.461 [2024-10-01 16:49:19.040333] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.461 [2024-10-01 16:49:19.040337] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d900) on tqpair=0x181d760
00:24:27.461 [2024-10-01 16:49:19.040346] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.461 [2024-10-01 16:49:19.040350] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.461 [2024-10-01 16:49:19.040353] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x181d760)
00:24:27.461 [2024-10-01 16:49:19.040360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.461 [2024-10-01 16:49:19.040369] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d900, cid 3, qid 0
00:24:27.461 [2024-10-01 16:49:19.040542] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.461 [2024-10-01 16:49:19.040547] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.461 [2024-10-01 16:49:19.040550] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.461 [2024-10-01 16:49:19.040554] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d900) on tqpair=0x181d760
00:24:27.461 [2024-10-01 16:49:19.040563] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.461 [2024-10-01 16:49:19.040567] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.461 [2024-10-01 16:49:19.040570] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x181d760)
00:24:27.461 [2024-10-01 16:49:19.040577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.461 [2024-10-01 16:49:19.040586] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d900, cid 3, qid 0
00:24:27.461 [2024-10-01 16:49:19.040763] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.461 [2024-10-01 16:49:19.040769] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.461 [2024-10-01 16:49:19.040772] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.461 [2024-10-01 16:49:19.040776] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d900) on tqpair=0x181d760
00:24:27.461 [2024-10-01 16:49:19.040785] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.461 [2024-10-01 16:49:19.040789] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.461 [2024-10-01 16:49:19.040792] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x181d760)
00:24:27.461 [2024-10-01 16:49:19.040799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.461 [2024-10-01 16:49:19.040808] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d900, cid 3, qid 0
00:24:27.461 [2024-10-01 16:49:19.044978] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.461 [2024-10-01 16:49:19.044995] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.461 [2024-10-01 16:49:19.044999] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.461 [2024-10-01 16:49:19.045002] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d900) on tqpair=0x181d760
00:24:27.461 [2024-10-01 16:49:19.045011] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:27.461 [2024-10-01 16:49:19.045015] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:27.461 [2024-10-01 16:49:19.045018] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x181d760)
00:24:27.461 [2024-10-01 16:49:19.045025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.461 [2024-10-01 16:49:19.045035] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187d900, cid 3, qid 0
00:24:27.461 [2024-10-01 16:49:19.045197] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:27.461 [2024-10-01 16:49:19.045203] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:27.461 [2024-10-01 16:49:19.045207] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:27.461 [2024-10-01 16:49:19.045210] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x187d900) on tqpair=0x181d760
00:24:27.461 [2024-10-01 16:49:19.045217] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds
00:24:27.461 0%
00:24:27.461 Data Units Read: 0
00:24:27.461 Data Units Written: 0
00:24:27.461 Host Read Commands: 0
00:24:27.461 Host Write Commands: 0
00:24:27.461 Controller Busy Time: 0 minutes
00:24:27.461 Power Cycles: 0
00:24:27.461 Power On Hours: 0 hours
00:24:27.461 Unsafe Shutdowns: 0
00:24:27.461 Unrecoverable Media Errors: 0
00:24:27.461 Lifetime Error Log Entries: 0
00:24:27.461 Warning Temperature Time: 0 minutes
00:24:27.461 Critical Temperature Time: 0 minutes
00:24:27.461 
00:24:27.461 Number of Queues
00:24:27.461 ================
00:24:27.461 Number of I/O Submission Queues: 127
00:24:27.461 Number of I/O Completion Queues: 127
00:24:27.461 
00:24:27.461 Active Namespaces
00:24:27.461 =================
00:24:27.461 Namespace ID:1
00:24:27.461 Error Recovery Timeout: Unlimited
00:24:27.461 Command Set Identifier: NVM (00h)
00:24:27.461 Deallocate: Supported
00:24:27.461 Deallocated/Unwritten Error: Not Supported
00:24:27.461 Deallocated Read Value: Unknown
00:24:27.461 Deallocate in Write Zeroes: Not Supported
00:24:27.461 Deallocated Guard Field: 0xFFFF
00:24:27.461 Flush: Supported
00:24:27.461 Reservation: Supported
00:24:27.461 Namespace Sharing Capabilities: Multiple Controllers
00:24:27.461 Size (in LBAs): 131072 (0GiB)
00:24:27.461 Capacity (in LBAs): 131072 (0GiB)
00:24:27.461 Utilization (in LBAs): 131072 (0GiB)
00:24:27.461 NGUID: ABCDEF0123456789ABCDEF0123456789
00:24:27.461 EUI64: ABCDEF0123456789
00:24:27.461 UUID: a9925676-b53a-4765-8f58-ba8e9478026d
00:24:27.461 Thin Provisioning: Not Supported
00:24:27.461 Per-NS Atomic Units: Yes
00:24:27.461 Atomic Boundary Size (Normal): 0
00:24:27.461 Atomic Boundary Size (PFail): 0
00:24:27.461 Atomic Boundary Offset: 0
00:24:27.461 Maximum Single Source Range Length: 65535
00:24:27.461 Maximum Copy Length: 65535
00:24:27.461 Maximum Source Range Count: 1
00:24:27.461 NGUID/EUI64 Never Reused: No
00:24:27.461 Namespace Write Protected: No
00:24:27.461 Number of LBA Formats: 1
00:24:27.461 Current LBA Format: LBA Format #00
00:24:27.461 LBA Format #00: Data Size: 512 Metadata Size: 0
00:24:27.461 
00:24:27.461 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:24:27.461 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:27.461 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:27.461 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:27.461 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:27.461 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:24:27.461 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:24:27.461 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup
00:24:27.461 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:24:27.461 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:27.461 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:24:27.461 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:27.461 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 2777365 ']'
00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 2777365
00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2777365 ']'
00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2777365
00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname
00:24:27.461 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2777365 00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2777365' 00:24:27.720 killing process with pid 2777365 00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2777365 00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2777365 00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.720 16:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:30.261 00:24:30.261 real 0m11.543s 00:24:30.261 user 0m8.692s 00:24:30.261 sys 0m5.990s 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:30.261 ************************************ 00:24:30.261 END TEST nvmf_identify 00:24:30.261 ************************************ 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.261 ************************************ 00:24:30.261 START TEST nvmf_perf 00:24:30.261 ************************************ 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:30.261 * Looking for test storage... 
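The block above closes the nvmf_identify suite: the controller report (health counters, 127 I/O queue pairs, one 131072-LBA namespace with 512-byte sectors) was pulled over NVMe/TCP, after which the subsystem was deleted and the target shut down in 11.5 s wall time. As a rough sketch only — the binary path is an assumption about a stock SPDK build, which places the identify example under build/examples, while the transport address is taken from this run — the same report could be fetched by hand with:

  # Hedged sketch; binary location assumed, target address from this run
  ./build/examples/identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The nvmf_perf suite that starts below reuses the same target bring-up helpers from nvmf/common.sh.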
00:24:30.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:30.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.261 --rc genhtml_branch_coverage=1 00:24:30.261 --rc genhtml_function_coverage=1 00:24:30.261 --rc genhtml_legend=1 00:24:30.261 --rc geninfo_all_blocks=1 00:24:30.261 --rc geninfo_unexecuted_blocks=1 00:24:30.261 00:24:30.261 ' 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:30.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.261 --rc genhtml_branch_coverage=1 00:24:30.261 --rc genhtml_function_coverage=1 00:24:30.261 --rc genhtml_legend=1 00:24:30.261 --rc geninfo_all_blocks=1 00:24:30.261 --rc geninfo_unexecuted_blocks=1 00:24:30.261 00:24:30.261 ' 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:30.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.261 --rc genhtml_branch_coverage=1 00:24:30.261 --rc genhtml_function_coverage=1 00:24:30.261 --rc genhtml_legend=1 00:24:30.261 --rc geninfo_all_blocks=1 00:24:30.261 --rc geninfo_unexecuted_blocks=1 00:24:30.261 00:24:30.261 ' 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:30.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.261 --rc genhtml_branch_coverage=1 00:24:30.261 --rc genhtml_function_coverage=1 00:24:30.261 --rc genhtml_legend=1 00:24:30.261 --rc geninfo_all_blocks=1 00:24:30.261 --rc geninfo_unexecuted_blocks=1 00:24:30.261 00:24:30.261 ' 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.261 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:30.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.262 16:49:21 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:30.262 16:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:38.393 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:38.393 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.393 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:38.394 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:38.394 16:49:28 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:38.394 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.394 16:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.394 16:49:29 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:38.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:38.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.545 ms
00:24:38.394
00:24:38.394 --- 10.0.0.2 ping statistics ---
00:24:38.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:38.394 rtt min/avg/max/mdev = 0.545/0.545/0.545/0.000 ms
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:38.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:38.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms
00:24:38.394
00:24:38.394 --- 10.0.0.1 ping statistics ---
00:24:38.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:38.394 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=2781868
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 2781868
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2781868 ']'
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:38.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:38.394 [2024-10-01 16:49:29.118306] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:24:38.394 [2024-10-01 16:49:29.118348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:38.394 [2024-10-01 16:49:29.192676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:38.394 [2024-10-01 16:49:29.253890] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:38.394 [2024-10-01 16:49:29.253924] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:38.394 [2024-10-01 16:49:29.253931] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:38.394 [2024-10-01 16:49:29.253937] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:38.394 [2024-10-01 16:49:29.253943] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:38.394 [2024-10-01 16:49:29.253994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:24:38.394 [2024-10-01 16:49:29.254019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:24:38.394 [2024-10-01 16:49:29.254143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:24:38.394 [2024-10-01 16:49:29.254145] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:24:38.394 16:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:24:41.688 16:49:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:24:41.688 16:49:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:24:41.688 16:49:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0
00:24:41.688 16:49:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:24:41.948 16:49:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
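perf.sh has now built its bdev list: gen_nvme.sh emits a bdev config for the local NVMe controller which load_subsystem_config applies, framework_get_config plus jq recovers its PCIe address (0000:65:00.0 in this run), and bdev_malloc_create adds a 64 MB, 512-byte-block RAM bdev that the RPC names Malloc0. Condensed sketch of those steps (the pipe between the first two commands is an assumption about how perf.sh wires them; rpc.py paths shortened):

  scripts/gen_nvme.sh | scripts/rpc.py load_subsystem_config    # attach the local controller as a bdev (piping assumed)
  scripts/rpc.py framework_get_config bdev \
      | jq -r '.[].params | select(.name=="Nvme0").traddr'      # -> 0000:65:00.0 in this run
  scripts/rpc.py bdev_malloc_create 64 512                      # 64 MB malloc bdev, 512 B blocks -> Malloc0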
00:24:41.948 16:49:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']'
00:24:41.948 16:49:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:24:41.948 16:49:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:24:41.948 16:49:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:24:42.208 [2024-10-01 16:49:33.654076] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:42.208 16:49:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:42.467 16:49:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:24:42.467 16:49:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:42.467 16:49:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:24:42.467 16:49:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:24:42.727 16:49:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:42.986 [2024-10-01 16:49:34.550689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:42.986 16:49:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:43.246 16:49:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']'
00:24:43.246 16:49:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:24:43.246 16:49:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:24:43.246 16:49:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:24:44.627 Initializing NVMe Controllers
00:24:44.627 Attached to NVMe Controller at 0000:65:00.0 [8086:0a54]
00:24:44.627 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0
00:24:44.627 Initialization complete. Launching workers.
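The xtrace above is the entire target-side bring-up; condensed, the RPC sequence is the sketch below (commands copied from the trace, rpc.py paths shortened). The latency table that follows it is the local-PCIe baseline run against 0000:65:00.0, before any fabric traffic.

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420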
00:24:44.627 ========================================================
00:24:44.627 Latency(us)
00:24:44.627 Device Information : IOPS MiB/s Average min max
00:24:44.627 PCIE (0000:65:00.0) NSID 1 from core 0: 86203.59 336.73 370.60 22.15 7266.73
00:24:44.627 ========================================================
00:24:44.627 Total : 86203.59 336.73 370.60 22.15 7266.73
00:24:44.627
00:24:44.627 16:49:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:46.007 Initializing NVMe Controllers
00:24:46.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:46.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:46.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:46.007 Initialization complete. Launching workers.
00:24:46.007 ========================================================
00:24:46.007 Latency(us)
00:24:46.007 Device Information : IOPS MiB/s Average min max
00:24:46.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 96.94 0.38 10562.34 108.27 47033.48
00:24:46.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 71.95 0.28 14008.12 4977.42 47902.42
00:24:46.007 ========================================================
00:24:46.007 Total : 168.89 0.66 12030.36 108.27 47902.42
00:24:46.007
00:24:46.007 16:49:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:47.387 Initializing NVMe Controllers
00:24:47.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:47.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:47.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:47.387 Initialization complete. Launching workers.
00:24:47.387 ========================================================
00:24:47.387 Latency(us)
00:24:47.387 Device Information : IOPS MiB/s Average min max
00:24:47.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10544.00 41.19 3035.20 514.60 6525.60
00:24:47.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3781.00 14.77 8509.94 6896.57 16016.36
00:24:47.387 ========================================================
00:24:47.387 Total : 14325.00 55.96 4480.23 514.60 16016.36
00:24:47.387
00:24:47.387 16:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:24:47.387 16:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:24:47.387 16:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:49.928 Initializing NVMe Controllers
00:24:49.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:49.928 Controller IO queue size 128, less than required.
00:24:49.928 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
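The two fabric tables above come from spdk_nvme_perf runs at queue depth 1 and 32 against the TCP listener; the "queue size 128, less than required" warnings here and continuing below appear once the -q 128 runs ask for more outstanding IO per namespace than the 128-entry IO queue the target negotiated, so the surplus waits in the host driver. A sketch of the invocation pattern used throughout, with flags as traced:

  # Flags per the trace: -q queue depth, -o IO size in bytes, -w workload,
  # -M read percentage, -t run time in seconds, -r transport ID of the target
  ./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'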
00:24:49.928 Controller IO queue size 128, less than required.
00:24:49.928 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:49.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:49.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:49.928 Initialization complete. Launching workers.
00:24:49.929 ========================================================
00:24:49.929 Latency(us)
00:24:49.929 Device Information : IOPS MiB/s Average min max
00:24:49.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1820.49 455.12 71377.73 44191.77 116583.90
00:24:49.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 633.15 158.29 213384.87 107338.47 322937.77
00:24:49.929 ========================================================
00:24:49.929 Total : 2453.64 613.41 108021.93 44191.77 322937.77
00:24:49.929
00:24:49.929 16:49:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:24:49.929 No valid NVMe controllers or AIO or URING devices found
00:24:49.929 Initializing NVMe Controllers
00:24:49.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:49.929 Controller IO queue size 128, less than required.
00:24:49.929 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:49.929 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:24:49.929 Controller IO queue size 128, less than required.
00:24:49.929 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:49.929 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:24:49.929 WARNING: Some requested NVMe devices were skipped
00:24:49.929 16:49:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:24:52.470 Initializing NVMe Controllers
00:24:52.470 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:52.470 Controller IO queue size 128, less than required.
00:24:52.470 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:52.470 Controller IO queue size 128, less than required.
00:24:52.470 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:52.470 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:52.470 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:52.470 Initialization complete. Launching workers.
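The -o 36964 run above removed both namespaces on purpose: perf requires the IO size to be an exact multiple of the namespace's 512-byte sector size, and 36964 is not, which is why it then reported no valid devices. A one-line check of the arithmetic behind the warning:

  # Why each ns was dropped: the IO size is not sector-aligned
  echo $((36964 % 512))   # -> 100; non-zero remainder, so the ns is removed from the test

Per-queue transport statistics for the final --transport-stat run follow below.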
00:24:52.470
00:24:52.470 ====================
00:24:52.470 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:24:52.470 TCP transport:
00:24:52.470 polls: 26753
00:24:52.470 idle_polls: 16698
00:24:52.470 sock_completions: 10055
00:24:52.470 nvme_completions: 6689
00:24:52.470 submitted_requests: 10026
00:24:52.470 queued_requests: 1
00:24:52.470
00:24:52.470 ====================
00:24:52.470 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:24:52.470 TCP transport:
00:24:52.470 polls: 24824
00:24:52.470 idle_polls: 14986
00:24:52.470 sock_completions: 9838
00:24:52.470 nvme_completions: 7117
00:24:52.470 submitted_requests: 10766
00:24:52.470 queued_requests: 1
00:24:52.470 ========================================================
00:24:52.470 Latency(us)
00:24:52.470 Device Information : IOPS MiB/s Average min max
00:24:52.470 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1671.68 417.92 78556.19 51062.59 121228.23
00:24:52.470 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1778.66 444.66 72462.35 41058.95 109521.58
00:24:52.470 ========================================================
00:24:52.470 Total : 3450.33 862.58 75414.80 41058.95 121228.23
00:24:52.470
00:24:52.471 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:24:52.730 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:52.730 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:24:52.730 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:24:52.730 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:24:52.730 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup
00:24:52.730 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:24:52.730 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:52.730 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:24:52.730 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:52.730 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:52.730 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:52.991 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:24:52.991 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:24:52.991 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 2781868 ']'
00:24:52.991 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 2781868
00:24:52.991 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2781868 ']'
00:24:52.991 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2781868
00:24:52.991 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname
00:24:52.991 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:52.991 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2781868
00:24:52.991 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:52.991 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:52.991 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2781868'
00:24:52.991 killing process with pid 2781868
00:24:52.991 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 2781868
00:24:52.991 16:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2781868
00:24:55.530 16:49:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:24:55.530 16:49:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:24:55.530 16:49:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:24:55.530 16:49:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:24:55.530 16:49:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save
00:24:55.530 16:49:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:24:55.530 16:49:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore
00:24:55.530 16:49:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:55.530 16:49:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:55.530 16:49:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:55.530 16:49:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:55.530 16:49:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:57.442 16:49:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:57.442
00:24:57.442 real 0m27.399s
00:24:57.442 user 1m11.425s
00:24:57.442 sys 0m8.491s
00:24:57.442 16:49:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:57.442 16:49:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:57.442 ************************************
00:24:57.442 END TEST nvmf_perf
00:24:57.442 ************************************
00:24:57.442 16:49:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:24:57.442 16:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:24:57.442 16:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:24:57.442 16:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:57.442 ************************************
00:24:57.442 START TEST nvmf_fio_host
00:24:57.442 ************************************
00:24:57.442 16:49:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:24:57.442 * Looking for test storage...
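The perf suite ends the same way identify did: the test subsystem is deleted over RPC, the host-side kernel modules are unloaded, nvmf_tgt (pid 2781868) is killed, and the test network is dismantled, for 27.4 s of wall time overall. Condensed sketch of that cleanup (the first four commands mirror the trace; the netns deletion is an assumed equivalent of what _remove_spdk_ns does internally):

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics    # unload host transports (rmmod output above)
  kill 2781868 && wait 2781868                              # stop the target reactor process
  iptables-save | grep -v SPDK_NVMF | iptables-restore      # drop the SPDK_NVMF ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                           # assumption: the effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                                  # clear the initiator-side address

The nvmf_fio_host suite that begins below repeats the same storage-discovery preamble.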
00:24:57.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.442 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:57.442 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:57.442 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:57.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.703 --rc genhtml_branch_coverage=1 00:24:57.703 --rc genhtml_function_coverage=1 00:24:57.703 --rc genhtml_legend=1 00:24:57.703 --rc geninfo_all_blocks=1 00:24:57.703 --rc geninfo_unexecuted_blocks=1 00:24:57.703 00:24:57.703 ' 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:57.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.703 --rc genhtml_branch_coverage=1 00:24:57.703 --rc genhtml_function_coverage=1 00:24:57.703 --rc genhtml_legend=1 00:24:57.703 --rc geninfo_all_blocks=1 00:24:57.703 --rc geninfo_unexecuted_blocks=1 00:24:57.703 00:24:57.703 ' 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:57.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.703 --rc genhtml_branch_coverage=1 00:24:57.703 --rc genhtml_function_coverage=1 00:24:57.703 --rc genhtml_legend=1 00:24:57.703 --rc geninfo_all_blocks=1 00:24:57.703 --rc geninfo_unexecuted_blocks=1 00:24:57.703 00:24:57.703 ' 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:57.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.703 --rc genhtml_branch_coverage=1 00:24:57.703 --rc genhtml_function_coverage=1 00:24:57.703 --rc genhtml_legend=1 00:24:57.703 --rc geninfo_all_blocks=1 00:24:57.703 --rc geninfo_unexecuted_blocks=1 00:24:57.703 00:24:57.703 ' 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.703 16:49:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.703 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:57.704 
16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:57.704 16:49:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.838 16:49:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.838 16:49:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.838 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:05.839 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:05.839 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:05.839 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:05.839 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:25:05.839 00:25:05.839 --- 10.0.0.2 ping statistics --- 00:25:05.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.839 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:05.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:25:05.839 00:25:05.839 --- 10.0.0.1 ping statistics --- 00:25:05.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.839 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2788602 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2788602 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2788602 ']' 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:05.839 16:49:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.839 [2024-10-01 16:49:56.415849] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
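Note on the setup just traced: nvmf_tcp_init splits the two E810 ports across network namespaces so a single host can act as both NVMe/TCP target and initiator over the physical wire instead of the kernel's local route. Condensed from the trace above into a standalone sketch — the interface names, addresses, and namespace name are the ones this particular run used; on another box the cvl_* names will differ:

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1    # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                          # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port
  ping -c 1 10.0.0.2                                    # root namespace -> namespaced target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # and the reverse path

Both pings came back with 0% loss above, so the nvmf_tgt started next is simply prefixed with 'ip netns exec cvl_0_0_ns_spdk' (that is NVMF_TARGET_NS_CMD being folded into NVMF_APP), putting its TCP listener inside the namespace.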
00:25:05.839 [2024-10-01 16:49:56.415897] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.839 [2024-10-01 16:49:56.497550] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:05.839 [2024-10-01 16:49:56.560596] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.839 [2024-10-01 16:49:56.560634] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.840 [2024-10-01 16:49:56.560642] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.840 [2024-10-01 16:49:56.560648] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.840 [2024-10-01 16:49:56.560653] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:05.840 [2024-10-01 16:49:56.560761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.840 [2024-10-01 16:49:56.560777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.840 [2024-10-01 16:49:56.560897] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:05.840 [2024-10-01 16:49:56.560900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.840 16:49:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:05.840 16:49:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:25:05.840 16:49:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:05.840 [2024-10-01 16:49:57.477503] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.840 16:49:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:05.840 16:49:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:05.840 16:49:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.099 16:49:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:06.099 Malloc1 00:25:06.099 16:49:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:06.359 16:49:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:06.618 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:06.879 [2024-10-01 16:49:58.367856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.879 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:07.139 16:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:07.398 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:07.398 fio-3.35 00:25:07.398 Starting 1 thread 00:25:09.950 [2024-10-01 16:50:01.305055] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329040 is same with the state(6) to be set 00:25:09.950 [2024-10-01 16:50:01.305100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1329040 is same with the state(6) to be set 00:25:09.950 00:25:09.950 test: (groupid=0, jobs=1): err= 0: pid=2789356: Tue Oct 1 16:50:01 2024 00:25:09.950 read: IOPS=13.1k, BW=51.2MiB/s (53.7MB/s)(103MiB/2004msec) 00:25:09.950 slat (nsec): min=1882, max=279583, avg=1997.64, stdev=2416.58 00:25:09.950 clat (usec): min=3377, max=9394, avg=5367.60, stdev=404.58 00:25:09.950 lat (usec): min=3379, max=9396, avg=5369.60, stdev=404.74 00:25:09.950 clat percentiles (usec): 00:25:09.950 | 1.00th=[ 4490], 5.00th=[ 4752], 10.00th=[ 4883], 20.00th=[ 5080], 00:25:09.950 | 30.00th=[ 5145], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5473], 00:25:09.950 | 70.00th=[ 5538], 80.00th=[ 5669], 90.00th=[ 5800], 95.00th=[ 5997], 00:25:09.950 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 8291], 99.95th=[ 8455], 00:25:09.950 | 99.99th=[ 9241] 00:25:09.950 bw ( KiB/s): min=51280, max=52824, per=99.91%, avg=52370.00, stdev=733.69, samples=4 00:25:09.950 iops : min=12820, max=13206, avg=13092.50, stdev=183.42, samples=4 00:25:09.950 write: IOPS=13.1k, BW=51.2MiB/s (53.7MB/s)(103MiB/2004msec); 0 zone resets 00:25:09.950 slat (nsec): min=1915, max=271172, avg=2061.57, stdev=1844.72 00:25:09.950 clat (usec): min=2579, max=8461, avg=4353.32, stdev=343.55 00:25:09.950 lat (usec): min=2582, max=8463, avg=4355.38, stdev=343.78 00:25:09.950 clat percentiles (usec): 00:25:09.950 | 1.00th=[ 3654], 5.00th=[ 3851], 10.00th=[ 3982], 20.00th=[ 4113], 00:25:09.950 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:25:09.950 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 4817], 00:25:09.950 | 99.00th=[ 5211], 99.50th=[ 5997], 99.90th=[ 7046], 99.95th=[ 7373], 00:25:09.950 | 99.99th=[ 8160] 00:25:09.950 bw ( KiB/s): min=51584, max=52928, per=100.00%, avg=52414.00, stdev=592.09, samples=4 00:25:09.950 iops : min=12896, max=13232, avg=13103.50, stdev=148.02, samples=4 00:25:09.950 lat (msec) : 4=6.01%, 10=93.99% 00:25:09.950 cpu : usr=72.24%, sys=26.71%, ctx=47, majf=0, minf=35 00:25:09.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:09.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:09.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:09.950 issued rwts: total=26260,26259,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:09.950 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:09.950 00:25:09.950 Run status group 0 (all jobs): 00:25:09.950 READ: bw=51.2MiB/s (53.7MB/s), 51.2MiB/s-51.2MiB/s (53.7MB/s-53.7MB/s), io=103MiB (108MB), run=2004-2004msec 00:25:09.950 WRITE: bw=51.2MiB/s (53.7MB/s), 51.2MiB/s-51.2MiB/s (53.7MB/s-53.7MB/s), io=103MiB (108MB), run=2004-2004msec 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:09.950 16:50:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:09.950 16:50:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:10.208 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:10.208 fio-3.35 00:25:10.208 Starting 1 thread 00:25:11.139 [2024-10-01 16:50:02.627602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13298b0 is same with the state(6) to be set 00:25:11.139 [2024-10-01 16:50:02.627646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13298b0 is same with the state(6) to be set 00:25:12.508 00:25:12.508 test: (groupid=0, jobs=1): err= 0: pid=2789839: Tue Oct 1 16:50:04 2024 00:25:12.508 read: IOPS=9674, BW=151MiB/s (159MB/s)(303MiB/2005msec) 00:25:12.508 slat (usec): min=3, max=101, avg= 3.35, stdev= 1.51 00:25:12.508 clat (usec): min=1659, max=53543, avg=8126.76, stdev=3831.51 00:25:12.508 lat (usec): min=1662, max=53546, avg=8130.10, stdev=3831.57 00:25:12.508 clat 
percentiles (usec): 00:25:12.508 | 1.00th=[ 4228], 5.00th=[ 4948], 10.00th=[ 5407], 20.00th=[ 6063], 00:25:12.508 | 30.00th=[ 6652], 40.00th=[ 7177], 50.00th=[ 7767], 60.00th=[ 8356], 00:25:12.508 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10290], 95.00th=[11207], 00:25:12.508 | 99.00th=[14615], 99.50th=[46400], 99.90th=[51643], 99.95th=[52691], 00:25:12.508 | 99.99th=[53740] 00:25:12.508 bw ( KiB/s): min=62976, max=94176, per=49.47%, avg=76584.00, stdev=12964.03, samples=4 00:25:12.508 iops : min= 3936, max= 5886, avg=4786.50, stdev=810.25, samples=4 00:25:12.508 write: IOPS=5889, BW=92.0MiB/s (96.5MB/s)(157MiB/1703msec); 0 zone resets 00:25:12.508 slat (usec): min=36, max=359, avg=37.88, stdev= 7.58 00:25:12.508 clat (usec): min=2033, max=15093, avg=8892.00, stdev=1491.03 00:25:12.508 lat (usec): min=2070, max=15130, avg=8929.88, stdev=1492.69 00:25:12.508 clat percentiles (usec): 00:25:12.508 | 1.00th=[ 6063], 5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 7701], 00:25:12.508 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9110], 00:25:12.508 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10683], 95.00th=[11469], 00:25:12.508 | 99.00th=[13435], 99.50th=[14091], 99.90th=[14746], 99.95th=[14877], 00:25:12.508 | 99.99th=[15008] 00:25:12.508 bw ( KiB/s): min=66688, max=98112, per=84.73%, avg=79832.00, stdev=13185.23, samples=4 00:25:12.508 iops : min= 4168, max= 6132, avg=4989.50, stdev=824.08, samples=4 00:25:12.508 lat (msec) : 2=0.02%, 4=0.43%, 10=83.64%, 20=15.48%, 50=0.26% 00:25:12.508 lat (msec) : 100=0.17% 00:25:12.508 cpu : usr=83.29%, sys=15.26%, ctx=19, majf=0, minf=61 00:25:12.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:12.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:12.508 issued rwts: total=19398,10029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.508 00:25:12.508 Run status group 0 (all jobs): 00:25:12.508 READ: bw=151MiB/s (159MB/s), 151MiB/s-151MiB/s (159MB/s-159MB/s), io=303MiB (318MB), run=2005-2005msec 00:25:12.508 WRITE: bw=92.0MiB/s (96.5MB/s), 92.0MiB/s-92.0MiB/s (96.5MB/s-96.5MB/s), io=157MiB (164MB), run=1703-1703msec 00:25:12.508 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:12.765 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:12.765 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:12.765 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:12.765 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:12.765 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:12.765 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:12.765 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:12.765 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:12.765 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:12.765 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:12.765 rmmod nvme_tcp 00:25:12.765 rmmod nvme_fabrics 
00:25:12.765 rmmod nvme_keyring 00:25:12.765 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:12.765 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:12.765 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:12.765 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 2788602 ']' 00:25:12.765 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 2788602 00:25:12.765 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2788602 ']' 00:25:12.765 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2788602 00:25:13.023 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:25:13.023 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:13.023 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2788602 00:25:13.023 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:13.023 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:13.023 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2788602' 00:25:13.023 killing process with pid 2788602 00:25:13.023 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2788602 00:25:13.023 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2788602 00:25:13.023 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:13.023 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:13.023 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:13.023 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:13.023 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:25:13.023 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:13.023 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:25:13.023 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:13.024 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:13.024 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.024 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.024 16:50:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:15.557 00:25:15.557 real 0m17.744s 00:25:15.557 user 0m55.774s 00:25:15.557 sys 0m7.307s 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.557 ************************************ 00:25:15.557 END TEST nvmf_fio_host 00:25:15.557 ************************************ 00:25:15.557 16:50:06 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.557 ************************************ 00:25:15.557 START TEST nvmf_failover 00:25:15.557 ************************************ 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:15.557 * Looking for test storage... 00:25:15.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:15.557 16:50:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:15.557 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:15.557 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:15.557 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:15.557 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:15.557 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:15.557 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:15.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.558 --rc genhtml_branch_coverage=1 00:25:15.558 --rc genhtml_function_coverage=1 00:25:15.558 --rc genhtml_legend=1 00:25:15.558 --rc geninfo_all_blocks=1 00:25:15.558 --rc geninfo_unexecuted_blocks=1 00:25:15.558 00:25:15.558 ' 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:15.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.558 --rc genhtml_branch_coverage=1 00:25:15.558 --rc genhtml_function_coverage=1 00:25:15.558 --rc genhtml_legend=1 00:25:15.558 --rc geninfo_all_blocks=1 00:25:15.558 --rc geninfo_unexecuted_blocks=1 00:25:15.558 00:25:15.558 ' 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:15.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.558 --rc genhtml_branch_coverage=1 00:25:15.558 --rc genhtml_function_coverage=1 00:25:15.558 --rc genhtml_legend=1 00:25:15.558 --rc geninfo_all_blocks=1 00:25:15.558 --rc geninfo_unexecuted_blocks=1 00:25:15.558 00:25:15.558 ' 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:15.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.558 --rc genhtml_branch_coverage=1 00:25:15.558 --rc genhtml_function_coverage=1 00:25:15.558 --rc genhtml_legend=1 00:25:15.558 --rc geninfo_all_blocks=1 00:25:15.558 --rc geninfo_unexecuted_blocks=1 00:25:15.558 00:25:15.558 ' 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:15.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
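A note on the recurring shell error just above: '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected' (it also appeared during the fio_host run) is cosmetic. The xtrace line shows the expanded test, '[' '' -eq 1 ']': whichever variable line 33 consults is empty in this environment, and test's -eq requires integers on both sides, so the command errors out, returns non-zero, and the script simply takes the false branch. A minimal reproduction plus the usual defensive forms (SOME_FLAG is a stand-in name; the real variable is not visible in the trace):

  SOME_FLAG=
  [ "$SOME_FLAG" -eq 1 ]            # prints "[: : integer expression expected", returns 2
  [ "${SOME_FLAG:-0}" -eq 1 ]       # default empty to 0: quiet, evaluates false
  [[ $SOME_FLAG == 1 ]]             # string comparison: never type-errors

Because the failed test just falls through to the else branch, build_nvmf_app_args keeps going and the run is unaffected.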
00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:15.558 16:50:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:23.687 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:23.687 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:23.687 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:23.687 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.687 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:23.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:23.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:25:23.688 00:25:23.688 --- 10.0.0.2 ping statistics --- 00:25:23.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.688 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:23.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:23.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:25:23.688 00:25:23.688 --- 10.0.0.1 ping statistics --- 00:25:23.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.688 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=2794320 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 2794320 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2794320 ']' 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:23.688 [2024-10-01 16:50:14.472342] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:25:23.688 [2024-10-01 16:50:14.472404] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.688 [2024-10-01 16:50:14.534777] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:23.688 [2024-10-01 16:50:14.600182] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:23.688 [2024-10-01 16:50:14.600220] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.688 [2024-10-01 16:50:14.600226] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:23.688 [2024-10-01 16:50:14.600232] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:23.688 [2024-10-01 16:50:14.600236] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:23.688 [2024-10-01 16:50:14.600345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:23.688 [2024-10-01 16:50:14.600480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:23.688 [2024-10-01 16:50:14.600482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:23.688 [2024-10-01 16:50:14.920982] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.688 16:50:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:23.688 Malloc0 00:25:23.688 16:50:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:23.947 16:50:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:23.947 16:50:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:24.206 [2024-10-01 16:50:15.781589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.206 16:50:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:24.464 [2024-10-01 16:50:15.990086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:24.464 16:50:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:24.724 [2024-10-01 16:50:16.186663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:25:24.724 16:50:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2794648 00:25:24.724 16:50:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:24.724 16:50:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:24.724 16:50:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2794648 /var/tmp/bdevperf.sock 00:25:24.724 16:50:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2794648 ']' 00:25:24.724 16:50:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:24.724 16:50:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:24.724 16:50:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:24.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:24.724 16:50:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:24.724 16:50:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:24.983 16:50:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:24.983 16:50:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:24.984 16:50:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:25.242 NVMe0n1 00:25:25.242 16:50:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:25.501 00:25:25.501 16:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2794783 00:25:25.501 16:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:25.501 16:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:26.881 16:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:26.881 [2024-10-01 16:50:18.357189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d82a0 is same with the state(6) to be set 00:25:26.881 [2024-10-01 16:50:18.357230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d82a0 is same with the state(6) to be set 00:25:26.881 [2024-10-01 16:50:18.357236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d82a0 is same with the state(6) to be set 00:25:26.881 [2024-10-01 16:50:18.357242] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d82a0 is same with the state(6) to be set
[2024-10-01 16:50:18.357295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d82a0 is same with the state(6) to be set
16:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:30.167 16:50:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
16:50:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
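Ports 4420 and 4421 (and now 4422) were all attached under the same bdev name NVMe0, so the bdev_nvme layer treats them as alternate paths to nqn.2016-06.io.spdk:cnode1, and the script can force a failover just by juggling target listeners: bring up the next port, then drop the active one. One step of that cycle, sketched against the bdevperf RPC socket used above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1      # new path first
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # then retire the old one
sleep 3   # let in-flight I/O abort and re-route before the next step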
[2024-10-01 16:50:21.878446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8d50 is same with the state(6) to be set
[2024-10-01 16:50:21.878516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8d50 is same with the state(6) to be set
[2024-10-01 16:50:21.878621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8d50 is same with the state(6) to be set
[2024-10-01 16:50:21.878645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8d50 is same with the state(6) to be set
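The burst of tcp.c:1773 *ERROR* lines is the target-side qpair being torn down after its listener vanished; noisy, but expected in this test. To see which listeners a subsystem still exposes between steps, the stock nvmf_get_subsystems RPC prints them, e.g.:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems   # listen_addresses shows the surviving ports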
16:50:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:33.789 16:50:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:33.789 [2024-10-01 16:50:25.096804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:33.789 16:50:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:34.769 16:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:34.769 [2024-10-01 16:50:26.307922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239e260 is same with the state(6) to be set
[2024-10-01 16:50:26.308057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239e260 is same with the state(6) to be set
16:50:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2794783
00:25:41.347 {
00:25:41.347 "results": [
00:25:41.347 {
00:25:41.347 "job": "NVMe0n1",
00:25:41.347 "core_mask": "0x1",
00:25:41.347 "workload": "verify",
00:25:41.347 "status": "finished",
00:25:41.347 "verify_range": {
00:25:41.347 "start": 0,
00:25:41.347 "length": 16384
00:25:41.347 },
00:25:41.347 "queue_depth": 128,
00:25:41.347 "io_size": 4096,
00:25:41.347 "runtime": 15.00497,
00:25:41.347 "iops": 11464.468106234135,
00:25:41.347 "mibps": 44.78307853997709,
00:25:41.347 "io_failed": 5525,
00:25:41.347 "io_timeout": 0,
00:25:41.347 "avg_latency_us": 10792.156802200216,
00:25:41.347 "min_latency_us": 500.9723076923077,
00:25:41.347 "max_latency_us": 29239.138461538463
00:25:41.347 }
00:25:41.347 ],
00:25:41.347 "core_count": 1
00:25:41.347 }
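The summary is internally consistent: 11464.47 IOPS of 4096-byte I/O works out to 11464.47 * 4096 / 1048576 ≈ 44.783 MiB/s, matching "mibps", and iops * runtime gives roughly 172,024 completed I/Os on top of the 5525 counted in "io_failed". A quick arithmetic check:

awk 'BEGIN { iops = 11464.468106234135; rt = 15.00497;
             printf "MiB/s %.5f, completed I/O %.0f\n", iops * 4096 / 1048576, iops * rt }'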
00:25:41.347 16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2794648
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2794648 ']'
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2794648
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2794648
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2794648'
killing process with pid 2794648
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2794648
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2794648
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-10-01 16:50:16.250531] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
[2024-10-01 16:50:16.250587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2794648 ]
[2024-10-01 16:50:16.326495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-01 16:50:16.387397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
11530.00 IOPS, 45.04 MiB/s
[2024-10-01 16:50:18.357925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-01 16:50:18.357958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-01 16:50:18.357979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-01 16:50:18.357988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-01 16:50:18.357997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-01 16:50:18.358005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-01 16:50:18.358014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-01 16:50:18.358021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-01 16:50:18.358030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-01 16:50:18.358037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-01 16:50:18.358046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-01 16:50:18.358053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-01 16:50:18.358062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-01 16:50:18.358069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-01 16:50:18.358078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
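Each ABORTED - SQ DELETION completion in try.txt is a command that was in flight on a path whose listener had just been dropped; those surface to bdevperf as failed I/O, which is where the "io_failed" count above comes from. A quick way to correlate the two against the captured file:

grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt   # should be in the same ballpark as io_failed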
[2024-10-01 16:50:18.358085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-01 16:50:18.359188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-01 16:50:18.359195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-01 16:50:18.359204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:101272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-01 16:50:18.359211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.349 [2024-10-01 16:50:18.359220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.349 [2024-10-01 16:50:18.359226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.349 [2024-10-01 16:50:18.359235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:101288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.349 [2024-10-01 16:50:18.359241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.349 [2024-10-01 16:50:18.359250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:101296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.349 [2024-10-01 16:50:18.359257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.349 [2024-10-01 16:50:18.359267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.349 [2024-10-01 16:50:18.359273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.349 [2024-10-01 16:50:18.359282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.349 [2024-10-01 16:50:18.359289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:101360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:101424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:41.350 [2024-10-01 16:50:18.359535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 
16:50:18.359694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.350 [2024-10-01 16:50:18.359765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.350 [2024-10-01 16:50:18.359782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.350 [2024-10-01 16:50:18.359797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.350 [2024-10-01 16:50:18.359814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.350 [2024-10-01 16:50:18.359829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.350 [2024-10-01 16:50:18.359844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.350 [2024-10-01 16:50:18.359860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.350 [2024-10-01 16:50:18.359875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.350 [2024-10-01 16:50:18.359890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.350 [2024-10-01 16:50:18.359906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.350 [2024-10-01 16:50:18.359915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.350 [2024-10-01 16:50:18.359922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.351 [2024-10-01 16:50:18.359931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.351 [2024-10-01 16:50:18.359937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.351 [2024-10-01 16:50:18.359948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.351 [2024-10-01 16:50:18.359954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.351 [2024-10-01 16:50:18.359963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.351 [2024-10-01 16:50:18.359973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.351 [2024-10-01 16:50:18.359982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.351 [2024-10-01 16:50:18.359989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.351 [2024-10-01 16:50:18.360008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.351 [2024-10-01 16:50:18.360017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.351 [2024-10-01 16:50:18.360023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101680 len:8 PRP1 0x0 PRP2 0x0 00:25:41.351 [2024-10-01 
16:50:18.360030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.351 [2024-10-01 16:50:18.360067] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd81440 was disconnected and freed. reset controller. 00:25:41.351 [2024-10-01 16:50:18.360076] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:41.351 [2024-10-01 16:50:18.360095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.351 [2024-10-01 16:50:18.360103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.351 [2024-10-01 16:50:18.360111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.351 [2024-10-01 16:50:18.360117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.351 [2024-10-01 16:50:18.360126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.351 [2024-10-01 16:50:18.360132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.351 [2024-10-01 16:50:18.360140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.351 [2024-10-01 16:50:18.360146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.351 [2024-10-01 16:50:18.360153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.351 [2024-10-01 16:50:18.363478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.351 [2024-10-01 16:50:18.363501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd60da0 (9): Bad file descriptor 00:25:41.351 [2024-10-01 16:50:18.407090] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
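
The block above records bdev_nvme's disconnect path: once the TCP connection drops, every command still queued on the deleted submission queue is completed by the driver itself with ABORTED - SQ DELETION (00/08), the queued admin ASYNC EVENT REQUESTs are aborted the same way, and the path fails over from 10.0.0.2:4420 to 10.0.0.2:4421 before the controller is reset. Below is a minimal C sketch of that drain step, assuming a simplified singly-linked request queue; the function names only echo the nvme_qpair.c entries in the log, this is not SPDK's implementation.

    #include <stdio.h>
    #include <stdlib.h>

    /* NVMe generic status: ABORTED - SQ DELETION is SCT 0x0, SC 0x08,
     * printed as (00/08) in the log entries above. */
    #define SCT_GENERIC            0x0
    #define SC_ABORTED_SQ_DELETION 0x08

    struct req {
        int cid;                 /* command identifier */
        unsigned long long lba;  /* starting block     */
        struct req *next;
    };

    /* The transport is gone, so the driver synthesizes the completion
     * itself (cf. nvme_qpair_manual_complete_request in the log). */
    static void manual_complete(const struct req *r)
    {
        printf("Command completed manually: cid:%d lba:%llu -> "
               "ABORTED - SQ DELETION (%02x/%02x)\n",
               r->cid, r->lba, SCT_GENERIC, SC_ABORTED_SQ_DELETION);
    }

    /* Drain everything still queued on a qpair whose SQ no longer
     * exists (cf. nvme_qpair_abort_queued_reqs). */
    static void abort_queued_reqs(struct req **head)
    {
        while (*head) {
            struct req *r = *head;
            *head = r->next;
            manual_complete(r);
            free(r);
        }
    }

    int main(void)
    {
        struct req *q = NULL;

        /* Queue a few requests, like the outstanding len:8 I/O above. */
        for (int i = 0; i < 3; i++) {
            struct req *r = malloc(sizeof(*r));
            if (!r)
                return 1;
            r->cid = i;
            r->lba = 101680 + 8ULL * (unsigned long long)i;
            r->next = q;
            q = r;
        }
        fprintf(stderr, "aborting queued i/o\n");
        abort_queued_reqs(&q);
        return 0;
    }

Note the dnr:0 ("do not retry" bit clear) in every synthesized completion above: it leaves the upper layers free to resubmit the aborted I/O on the new path once the reset completes, which is why the workload's IOPS counters keep ticking in the samples below.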
00:25:41.351 11362.00 IOPS, 44.38 MiB/s 11437.67 IOPS, 44.68 MiB/s 11492.50 IOPS, 44.89 MiB/s
00:25:41.351 [2024-10-01 16:50:21.879959 - 16:50:21.881375] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: a second burst of queued I/O on sqid:1 (READ nsid:1 lba:42400 through lba:42760 len:8 interleaved with WRITE nsid:1 lba:43056 through lba:43376 len:8) each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:41.353 [2024-10-01 16:50:21.881395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: READ sqid:1 cid:0 nsid:1 lba:42768 len:8 PRP1 0x0 PRP2 0x0 -> ABORTED - SQ DELETION (00/08)
00:25:41.353 [2024-10-01 16:50:21.881437 - 16:50:21.881491] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: four queued admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:3,2,1,0) each completed as ABORTED - SQ DELETION (00/08)
00:25:41.353 [2024-10-01 16:50:21.881498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd60da0 is same with the state(6) to be set
00:25:41.353 [2024-10-01 16:50:21.881688 - 16:50:21.881963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs / 558:nvme_qpair_manual_complete_request: remaining queued commands (READ lba:42776 through lba:42816, WRITE lba:43384 through lba:43416, all sqid:1 cid:0 len:8 PRP1 0x0 PRP2 0x0) completed manually as ABORTED - SQ DELETION (00/08)
00:25:41.354 [2024-10-01
16:50:21.881975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.354 [2024-10-01 16:50:21.881981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.354 [2024-10-01 16:50:21.881986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42824 len:8 PRP1 0x0 PRP2 0x0 00:25:41.354 [2024-10-01 16:50:21.881993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.354 [2024-10-01 16:50:21.882000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.354 [2024-10-01 16:50:21.882005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.354 [2024-10-01 16:50:21.882012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42832 len:8 PRP1 0x0 PRP2 0x0 00:25:41.354 [2024-10-01 16:50:21.882018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.354 [2024-10-01 16:50:21.882026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.354 [2024-10-01 16:50:21.882031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.354 [2024-10-01 16:50:21.882038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42840 len:8 PRP1 0x0 PRP2 0x0 00:25:41.354 [2024-10-01 16:50:21.882044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.354 [2024-10-01 16:50:21.882052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.354 [2024-10-01 16:50:21.882057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.354 [2024-10-01 16:50:21.882063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42848 len:8 PRP1 0x0 PRP2 0x0 00:25:41.354 [2024-10-01 16:50:21.882070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.354 [2024-10-01 16:50:21.882077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.354 [2024-10-01 16:50:21.882082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.354 [2024-10-01 16:50:21.882088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42856 len:8 PRP1 0x0 PRP2 0x0 00:25:41.354 [2024-10-01 16:50:21.882094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.354 [2024-10-01 16:50:21.882101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.354 [2024-10-01 16:50:21.882106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.354 [2024-10-01 16:50:21.882113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42864 len:8 PRP1 0x0 PRP2 0x0 00:25:41.354 [2024-10-01 16:50:21.882119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.354 [2024-10-01 16:50:21.882126] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.354 [2024-10-01 16:50:21.882132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.354 [2024-10-01 16:50:21.882137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42872 len:8 PRP1 0x0 PRP2 0x0 00:25:41.354 [2024-10-01 16:50:21.882144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.354 [2024-10-01 16:50:21.882151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.354 [2024-10-01 16:50:21.882157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.354 [2024-10-01 16:50:21.882163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42880 len:8 PRP1 0x0 PRP2 0x0 00:25:41.354 [2024-10-01 16:50:21.882170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.354 [2024-10-01 16:50:21.882178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.354 [2024-10-01 16:50:21.882183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.354 [2024-10-01 16:50:21.882188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42888 len:8 PRP1 0x0 PRP2 0x0 00:25:41.354 [2024-10-01 16:50:21.882195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.354 [2024-10-01 16:50:21.882202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.354 [2024-10-01 16:50:21.882208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.354 [2024-10-01 16:50:21.882214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42896 len:8 PRP1 0x0 PRP2 0x0 00:25:41.354 [2024-10-01 16:50:21.882220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.354 [2024-10-01 16:50:21.882227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.354 [2024-10-01 16:50:21.882234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.354 [2024-10-01 16:50:21.882240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42904 len:8 PRP1 0x0 PRP2 0x0 00:25:41.354 [2024-10-01 16:50:21.882247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.354 [2024-10-01 16:50:21.882254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.354 [2024-10-01 16:50:21.882260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.354 [2024-10-01 16:50:21.882266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42912 len:8 PRP1 0x0 PRP2 0x0 00:25:41.354 [2024-10-01 16:50:21.882272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.354 [2024-10-01 16:50:21.882280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:25:41.354 [2024-10-01 16:50:21.882285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.354 [2024-10-01 16:50:21.882291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42920 len:8 PRP1 0x0 PRP2 0x0 00:25:41.354 [2024-10-01 16:50:21.882297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.354 [2024-10-01 16:50:21.882304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.354 [2024-10-01 16:50:21.882310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.354 [2024-10-01 16:50:21.882315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42928 len:8 PRP1 0x0 PRP2 0x0 00:25:41.354 [2024-10-01 16:50:21.882322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.354 [2024-10-01 16:50:21.882329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.354 [2024-10-01 16:50:21.882334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.354 [2024-10-01 16:50:21.882340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42936 len:8 PRP1 0x0 PRP2 0x0 00:25:41.354 [2024-10-01 16:50:21.882346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.354 [2024-10-01 16:50:21.882354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.354 [2024-10-01 16:50:21.882359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.354 [2024-10-01 16:50:21.882365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42944 len:8 PRP1 0x0 PRP2 0x0 00:25:41.354 [2024-10-01 16:50:21.882371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.354 [2024-10-01 16:50:21.882379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.354 [2024-10-01 16:50:21.882384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.354 [2024-10-01 16:50:21.882389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42952 len:8 PRP1 0x0 PRP2 0x0 00:25:41.354 [2024-10-01 16:50:21.882396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.354 [2024-10-01 16:50:21.882404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.354 [2024-10-01 16:50:21.882410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.354 [2024-10-01 16:50:21.882415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42960 len:8 PRP1 0x0 PRP2 0x0 00:25:41.354 [2024-10-01 16:50:21.882422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.354 [2024-10-01 16:50:21.882430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.882436] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.355 [2024-10-01 16:50:21.882442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42968 len:8 PRP1 0x0 PRP2 0x0 00:25:41.355 [2024-10-01 16:50:21.882448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.355 [2024-10-01 16:50:21.882455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.892673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.355 [2024-10-01 16:50:21.892705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42976 len:8 PRP1 0x0 PRP2 0x0 00:25:41.355 [2024-10-01 16:50:21.892716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.355 [2024-10-01 16:50:21.892727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.892733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.355 [2024-10-01 16:50:21.892740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42984 len:8 PRP1 0x0 PRP2 0x0 00:25:41.355 [2024-10-01 16:50:21.892747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.355 [2024-10-01 16:50:21.892755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.892760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.355 [2024-10-01 16:50:21.892766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42992 len:8 PRP1 0x0 PRP2 0x0 00:25:41.355 [2024-10-01 16:50:21.892773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.355 [2024-10-01 16:50:21.892781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.892787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.355 [2024-10-01 16:50:21.892793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43000 len:8 PRP1 0x0 PRP2 0x0 00:25:41.355 [2024-10-01 16:50:21.892800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.355 [2024-10-01 16:50:21.892808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.892814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.355 [2024-10-01 16:50:21.892820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43008 len:8 PRP1 0x0 PRP2 0x0 00:25:41.355 [2024-10-01 16:50:21.892827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.355 [2024-10-01 16:50:21.892834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.892840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:25:41.355 [2024-10-01 16:50:21.892846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43016 len:8 PRP1 0x0 PRP2 0x0 00:25:41.355 [2024-10-01 16:50:21.892853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.355 [2024-10-01 16:50:21.892860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.892866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.355 [2024-10-01 16:50:21.892872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43024 len:8 PRP1 0x0 PRP2 0x0 00:25:41.355 [2024-10-01 16:50:21.892883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.355 [2024-10-01 16:50:21.892891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.892896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.355 [2024-10-01 16:50:21.892902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43032 len:8 PRP1 0x0 PRP2 0x0 00:25:41.355 [2024-10-01 16:50:21.892909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.355 [2024-10-01 16:50:21.892916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.892922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.355 [2024-10-01 16:50:21.892928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43040 len:8 PRP1 0x0 PRP2 0x0 00:25:41.355 [2024-10-01 16:50:21.892935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.355 [2024-10-01 16:50:21.892942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.892948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.355 [2024-10-01 16:50:21.892954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43048 len:8 PRP1 0x0 PRP2 0x0 00:25:41.355 [2024-10-01 16:50:21.892961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.355 [2024-10-01 16:50:21.892975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.892980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.355 [2024-10-01 16:50:21.892986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42400 len:8 PRP1 0x0 PRP2 0x0 00:25:41.355 [2024-10-01 16:50:21.892993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.355 [2024-10-01 16:50:21.893001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.893006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.355 [2024-10-01 16:50:21.893012] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43056 len:8 PRP1 0x0 PRP2 0x0 00:25:41.355 [2024-10-01 16:50:21.893019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.355 [2024-10-01 16:50:21.893027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.893032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.355 [2024-10-01 16:50:21.893038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42408 len:8 PRP1 0x0 PRP2 0x0 00:25:41.355 [2024-10-01 16:50:21.893045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.355 [2024-10-01 16:50:21.893053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.893058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.355 [2024-10-01 16:50:21.893064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42416 len:8 PRP1 0x0 PRP2 0x0 00:25:41.355 [2024-10-01 16:50:21.893071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.355 [2024-10-01 16:50:21.893078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.893088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.355 [2024-10-01 16:50:21.893094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42424 len:8 PRP1 0x0 PRP2 0x0 00:25:41.355 [2024-10-01 16:50:21.893101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.355 [2024-10-01 16:50:21.893109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.893114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.355 [2024-10-01 16:50:21.893120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42432 len:8 PRP1 0x0 PRP2 0x0 00:25:41.355 [2024-10-01 16:50:21.893127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.355 [2024-10-01 16:50:21.893135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.893140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.355 [2024-10-01 16:50:21.893146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42440 len:8 PRP1 0x0 PRP2 0x0 00:25:41.355 [2024-10-01 16:50:21.893153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.355 [2024-10-01 16:50:21.893160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.355 [2024-10-01 16:50:21.893166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.355 [2024-10-01 16:50:21.893171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
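
The status in every completion above decodes as SCT 0x0 (generic command status) / SC 0x08 (Command Aborted due to SQ Deletion): the transport disconnect on tqpair=0xd60da0 tore down the submission queue, so the driver fails every in-flight and queued command back to the application, and dnr:0 (do-not-retry clear) signals that a retry on a reconnected qpair is permitted. Below is a minimal sketch of how an SPDK I/O completion callback can recognize these aborts; io_complete and resubmit_io are hypothetical application names, while the spdk_nvme_cpl accessors and status constants are SPDK's public ones.

    #include "spdk/nvme.h"

    /* Hypothetical application helper: re-queue the I/O once the
     * qpair has been reconnected (application-specific logic). */
    static void
    resubmit_io(void *ctx)
    {
            (void)ctx;
    }

    /* I/O completion callback, matching the spdk_nvme_cmd_cb signature. */
    static void
    io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl) &&
                cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* The submission queue was deleted under us (e.g. on a
                     * transport disconnect, as in this log). dnr == 0 means
                     * the command may be retried. */
                    if (!cpl->status.dnr) {
                            resubmit_io(ctx);
                    }
                    return;
            }
            /* ... normal completion handling ... */
    }
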
[... cycle continues through two further poll batches, resuming at 16:50:21.892673 and 16:50:21.901943, for the remaining queued commands ...]
00:25:41.358 [2024-10-01 16:50:21.902298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:41.358 [2024-10-01 16:50:21.902303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:41.358 [2024-10-01 16:50:21.902308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42608 len:8 PRP1 0x0 PRP2 0x0
00:25:41.358 [2024-10-01 16:50:21.902316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:41.358 [2024-10-01 16:50:21.902325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:41.358 [2024-10-01 16:50:21.902331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.358 [2024-10-01 16:50:21.902339]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42616 len:8 PRP1 0x0 PRP2 0x0 00:25:41.358 [2024-10-01 16:50:21.902348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.358 [2024-10-01 16:50:21.902358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.358 [2024-10-01 16:50:21.902365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.358 [2024-10-01 16:50:21.902372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42624 len:8 PRP1 0x0 PRP2 0x0 00:25:41.358 [2024-10-01 16:50:21.902381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.358 [2024-10-01 16:50:21.902391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.358 [2024-10-01 16:50:21.902398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.358 [2024-10-01 16:50:21.902406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42632 len:8 PRP1 0x0 PRP2 0x0 00:25:41.358 [2024-10-01 16:50:21.902415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.358 [2024-10-01 16:50:21.902425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.358 [2024-10-01 16:50:21.902432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.358 [2024-10-01 16:50:21.902439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42640 len:8 PRP1 0x0 PRP2 0x0 00:25:41.358 [2024-10-01 16:50:21.902448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.358 [2024-10-01 16:50:21.902458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.358 [2024-10-01 16:50:21.902465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.358 [2024-10-01 16:50:21.902473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42648 len:8 PRP1 0x0 PRP2 0x0 00:25:41.358 [2024-10-01 16:50:21.902482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.358 [2024-10-01 16:50:21.902492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.358 [2024-10-01 16:50:21.902499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.358 [2024-10-01 16:50:21.902506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42656 len:8 PRP1 0x0 PRP2 0x0 00:25:41.358 [2024-10-01 16:50:21.902517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.358 [2024-10-01 16:50:21.902527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.358 [2024-10-01 16:50:21.902534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.358 [2024-10-01 16:50:21.902542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:42664 len:8 PRP1 0x0 PRP2 0x0 00:25:41.358 [2024-10-01 16:50:21.902551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.358 [2024-10-01 16:50:21.902561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.358 [2024-10-01 16:50:21.902568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.358 [2024-10-01 16:50:21.902576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42672 len:8 PRP1 0x0 PRP2 0x0 00:25:41.358 [2024-10-01 16:50:21.902585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.358 [2024-10-01 16:50:21.902595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.358 [2024-10-01 16:50:21.902602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.358 [2024-10-01 16:50:21.902610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42680 len:8 PRP1 0x0 PRP2 0x0 00:25:41.358 [2024-10-01 16:50:21.902619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.358 [2024-10-01 16:50:21.902628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.358 [2024-10-01 16:50:21.902636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.358 [2024-10-01 16:50:21.902643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42688 len:8 PRP1 0x0 PRP2 0x0 00:25:41.358 [2024-10-01 16:50:21.902652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.358 [2024-10-01 16:50:21.902662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.358 [2024-10-01 16:50:21.902669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.358 [2024-10-01 16:50:21.902677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42696 len:8 PRP1 0x0 PRP2 0x0 00:25:41.358 [2024-10-01 16:50:21.902686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.358 [2024-10-01 16:50:21.902695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.358 [2024-10-01 16:50:21.902702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.358 [2024-10-01 16:50:21.902710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42704 len:8 PRP1 0x0 PRP2 0x0 00:25:41.358 [2024-10-01 16:50:21.902719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.358 [2024-10-01 16:50:21.902729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.358 [2024-10-01 16:50:21.902736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.358 [2024-10-01 16:50:21.902743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42712 len:8 PRP1 0x0 PRP2 0x0 
00:25:41.358 [2024-10-01 16:50:21.902752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.358 [2024-10-01 16:50:21.902762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.359 [2024-10-01 16:50:21.902769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.359 [2024-10-01 16:50:21.902779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42720 len:8 PRP1 0x0 PRP2 0x0 00:25:41.359 [2024-10-01 16:50:21.902788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:21.902798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.359 [2024-10-01 16:50:21.902804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.359 [2024-10-01 16:50:21.902813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42728 len:8 PRP1 0x0 PRP2 0x0 00:25:41.359 [2024-10-01 16:50:21.902821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:21.902831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.359 [2024-10-01 16:50:21.902838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.359 [2024-10-01 16:50:21.902846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42736 len:8 PRP1 0x0 PRP2 0x0 00:25:41.359 [2024-10-01 16:50:21.902855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:21.902865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.359 [2024-10-01 16:50:21.902872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.359 [2024-10-01 16:50:21.902880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42744 len:8 PRP1 0x0 PRP2 0x0 00:25:41.359 [2024-10-01 16:50:21.902889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:21.902898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.359 [2024-10-01 16:50:21.902905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.359 [2024-10-01 16:50:21.902913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42752 len:8 PRP1 0x0 PRP2 0x0 00:25:41.359 [2024-10-01 16:50:21.902923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:21.902932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.359 [2024-10-01 16:50:21.902939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.359 [2024-10-01 16:50:21.902947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42760 len:8 PRP1 0x0 PRP2 0x0 00:25:41.359 [2024-10-01 16:50:21.902956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:41.359 [2024-10-01 16:50:21.902966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:41.359 [2024-10-01 16:50:21.902978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:41.359 [2024-10-01 16:50:21.902986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42768 len:8 PRP1 0x0 PRP2 0x0
00:25:41.359 [2024-10-01 16:50:21.902996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:41.359 [2024-10-01 16:50:21.903042] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd83230 was disconnected and freed. reset controller.
00:25:41.359 [2024-10-01 16:50:21.903053] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:25:41.359 [2024-10-01 16:50:21.903064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.359 [2024-10-01 16:50:21.903118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd60da0 (9): Bad file descriptor
00:25:41.359 [2024-10-01 16:50:21.907637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.359 [2024-10-01 16:50:21.943171] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:41.359 11389.20 IOPS, 44.49 MiB/s 11405.00 IOPS, 44.55 MiB/s 11408.86 IOPS, 44.57 MiB/s 11440.88 IOPS, 44.69 MiB/s 11463.67 IOPS, 44.78 MiB/s [2024-10-01 16:50:26.310371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:41.359 [2024-10-01 16:50:26.310406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:41.359 [2024-10-01 16:50:26.310423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:41.359 [2024-10-01 16:50:26.310431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:41.359 [2024-10-01 16:50:26.310441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:41.359 [2024-10-01 16:50:26.310448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:41.359 [2024-10-01 16:50:26.310457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:41.359 [2024-10-01 16:50:26.310463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:41.359 [2024-10-01 16:50:26.310472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:41.359 [2024-10-01 16:50:26.310479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:41.359 [2024-10-01 16:50:26.310488] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62672 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.359 [2024-10-01 16:50:26.310801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.359 [2024-10-01 16:50:26.310809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 
[2024-10-01 16:50:26.310816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.310825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.310832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.310840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.310847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.310855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.310862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.310871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.360 [2024-10-01 16:50:26.310877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.310886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.310893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.310901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.310908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.310917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.310923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.310933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.310940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.310949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.310955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.310964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.310977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.310985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.310992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.360 [2024-10-01 16:50:26.311375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.360 [2024-10-01 16:50:26.311381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311455] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311612] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63232 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 
[2024-10-01 16:50:26.311926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.311988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.311995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.312003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.361 [2024-10-01 16:50:26.312010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.361 [2024-10-01 16:50:26.312019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.362 [2024-10-01 16:50:26.312025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.362 [2024-10-01 16:50:26.312034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.362 [2024-10-01 16:50:26.312041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.362 [2024-10-01 16:50:26.312050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.362 [2024-10-01 16:50:26.312056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.362 [2024-10-01 16:50:26.312065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.362 [2024-10-01 16:50:26.312072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.362 [2024-10-01 16:50:26.312080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.362 [2024-10-01 16:50:26.312087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / ABORTED - SQ DELETION completion pairs for lba 63400 through 63560, plus three queued i/o aborted and completed manually, elided ...]
00:25:41.362 [2024-10-01 16:50:26.312498] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd90400 was disconnected and freed. reset controller.
00:25:41.362 [2024-10-01 16:50:26.312506] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
[... four aborted ASYNC EVENT REQUEST admin completions elided ...]
00:25:41.362 [2024-10-01 16:50:26.312586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.362 [2024-10-01 16:50:26.312617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd60da0 (9): Bad file descriptor
00:25:41.362 [2024-10-01 16:50:26.315870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.362 [2024-10-01 16:50:26.389935] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
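The sequence above — queued WRITEs aborted on SQ deletion, the qpair freed, failover from 10.0.0.2:4422 to 10.0.0.2:4420, and a successful controller reset — is what host/failover.sh provokes by detaching listeners while bdevperf runs I/O. A minimal sketch of the setup it drives, using only RPC calls that appear in this log, assuming a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and a bdevperf instance listening on /var/tmp/bdevperf.sock:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    # Expose the same subsystem on two more ports so the initiator has failover paths.
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422
    # Attach the bdevperf-side controller to all three paths under one bdev name.
    for port in 4420 4421 4422; do
        $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
            -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
    done
    # Tearing down the active path triggers the abort/failover/reset seen above.
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN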
00:25:41.362 11418.00 IOPS, 44.60 MiB/s 11439.36 IOPS, 44.69 MiB/s 11451.58 IOPS, 44.73 MiB/s 11450.69 IOPS, 44.73 MiB/s 11458.79 IOPS, 44.76 MiB/s
00:25:41.362 Latency(us)
00:25:41.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:41.362 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:41.362 Verification LBA range: start 0x0 length 0x4000
00:25:41.362 NVMe0n1 : 15.00 11464.47 44.78 368.21 0.00 10792.16 500.97 29239.14
00:25:41.362 ===================================================================================================================
00:25:41.362 Total : 11464.47 44.78 368.21 0.00 10792.16 500.97 29239.14
00:25:41.362 Received shutdown signal, test time was about 15.000000 seconds
00:25:41.362
00:25:41.362 Latency(us)
00:25:41.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:41.362 ===================================================================================================================
00:25:41.362 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2797392
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2797392 /var/tmp/bdevperf.sock
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2797392 ']'
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:41.363 16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:41.363 16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:41.363 16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:41.363 16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:41.363 16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:41.363 [2024-10-01 16:50:32.979624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:41.363 16:50:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:41.622 [2024-10-01 16:50:33.188155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:41.622 16:50:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:42.194 NVMe0n1 00:25:42.194 16:50:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:42.194 00:25:42.194 16:50:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:42.453 00:25:42.453 16:50:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:42.453 16:50:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:42.712 16:50:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:42.970 16:50:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:46.253 16:50:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:46.254 16:50:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:46.254 16:50:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2798221 00:25:46.254 16:50:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:46.254 16:50:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2798221 00:25:47.632 { 00:25:47.632 "results": [ 00:25:47.632 { 00:25:47.632 "job": "NVMe0n1", 00:25:47.632 "core_mask": "0x1", 00:25:47.632 "workload": "verify", 
00:25:47.632 "status": "finished", 00:25:47.632 "verify_range": { 00:25:47.632 "start": 0, 00:25:47.632 "length": 16384 00:25:47.632 }, 00:25:47.632 "queue_depth": 128, 00:25:47.632 "io_size": 4096, 00:25:47.632 "runtime": 1.010963, 00:25:47.632 "iops": 11562.243128581362, 00:25:47.632 "mibps": 45.165012221020945, 00:25:47.632 "io_failed": 0, 00:25:47.632 "io_timeout": 0, 00:25:47.632 "avg_latency_us": 11024.194627690727, 00:25:47.632 "min_latency_us": 2129.92, 00:25:47.632 "max_latency_us": 10233.69846153846 00:25:47.632 } 00:25:47.632 ], 00:25:47.632 "core_count": 1 00:25:47.632 } 00:25:47.632 16:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:47.632 [2024-10-01 16:50:32.576958] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:25:47.632 [2024-10-01 16:50:32.577025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2797392 ] 00:25:47.632 [2024-10-01 16:50:32.656418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.632 [2024-10-01 16:50:32.717696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.632 [2024-10-01 16:50:34.531515] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:47.632 [2024-10-01 16:50:34.531558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.632 [2024-10-01 16:50:34.531569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.632 [2024-10-01 16:50:34.531578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.632 [2024-10-01 16:50:34.531585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.632 [2024-10-01 16:50:34.531592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.632 [2024-10-01 16:50:34.531599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.632 [2024-10-01 16:50:34.531607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.632 [2024-10-01 16:50:34.531613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.632 [2024-10-01 16:50:34.531621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:47.632 [2024-10-01 16:50:34.531647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:47.632 [2024-10-01 16:50:34.531660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2173da0 (9): Bad file descriptor 00:25:47.632 [2024-10-01 16:50:34.544527] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:47.632 Running I/O for 1 seconds... 
00:25:47.632 11554.00 IOPS, 45.13 MiB/s
00:25:47.632 Latency(us)
00:25:47.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:47.632 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:47.632 Verification LBA range: start 0x0 length 0x4000
00:25:47.632 NVMe0n1 : 1.01 11562.24 45.17 0.00 0.00 11024.19 2129.92 10233.70
00:25:47.632 ===================================================================================================================
00:25:47.632 Total : 11562.24 45.17 0.00 0.00 11024.19 2129.92 10233.70
00:25:47.632 16:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
16:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
16:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
16:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
16:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:25:47.890 16:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:48.148 16:50:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:25:51.435 16:50:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
16:50:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
16:50:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2797392
16:50:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2797392 ']'
16:50:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2797392
16:50:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
16:50:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
16:50:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2797392
00:25:51.435 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2797392'
killing process with pid 2797392
16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2797392
16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2797392
00:25:51.694 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
16:50:43 nvmf_tcp.nvmf_host.nvmf_failover --
host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:51.694 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:51.694 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:51.695 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:51.695 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:51.695 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:51.695 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:51.695 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:51.695 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:51.695 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:51.695 rmmod nvme_tcp 00:25:51.953 rmmod nvme_fabrics 00:25:51.953 rmmod nvme_keyring 00:25:51.953 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:51.953 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:51.953 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:51.953 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 2794320 ']' 00:25:51.953 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 2794320 00:25:51.953 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2794320 ']' 00:25:51.953 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2794320 00:25:51.953 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:51.953 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:51.953 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2794320 00:25:51.953 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:51.953 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:51.953 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2794320' 00:25:51.953 killing process with pid 2794320 00:25:51.953 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2794320 00:25:51.953 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2794320 00:25:52.213 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:52.213 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:52.213 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:52.213 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:52.213 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:25:52.213 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:52.213 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:25:52.213 
16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:52.213 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:52.213 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.213 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.213 16:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.116 16:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:54.117 00:25:54.117 real 0m38.905s 00:25:54.117 user 1m59.754s 00:25:54.117 sys 0m8.399s 00:25:54.117 16:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:54.117 16:50:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:54.117 ************************************ 00:25:54.117 END TEST nvmf_failover 00:25:54.117 ************************************ 00:25:54.117 16:50:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:54.117 16:50:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:54.117 16:50:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:54.117 16:50:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.377 ************************************ 00:25:54.377 START TEST nvmf_host_discovery 00:25:54.377 ************************************ 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:54.377 * Looking for test storage... 
00:25:54.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:54.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.377 --rc genhtml_branch_coverage=1 00:25:54.377 --rc genhtml_function_coverage=1 00:25:54.377 --rc genhtml_legend=1 00:25:54.377 --rc geninfo_all_blocks=1 00:25:54.377 --rc geninfo_unexecuted_blocks=1 00:25:54.377 00:25:54.377 ' 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:54.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.377 --rc genhtml_branch_coverage=1 00:25:54.377 --rc genhtml_function_coverage=1 00:25:54.377 --rc genhtml_legend=1 00:25:54.377 --rc geninfo_all_blocks=1 00:25:54.377 --rc geninfo_unexecuted_blocks=1 00:25:54.377 00:25:54.377 ' 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:54.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.377 --rc genhtml_branch_coverage=1 00:25:54.377 --rc genhtml_function_coverage=1 00:25:54.377 --rc genhtml_legend=1 00:25:54.377 --rc geninfo_all_blocks=1 00:25:54.377 --rc geninfo_unexecuted_blocks=1 00:25:54.377 00:25:54.377 ' 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:54.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.377 --rc genhtml_branch_coverage=1 00:25:54.377 --rc genhtml_function_coverage=1 00:25:54.377 --rc genhtml_legend=1 00:25:54.377 --rc geninfo_all_blocks=1 00:25:54.377 --rc geninfo_unexecuted_blocks=1 00:25:54.377 00:25:54.377 ' 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:54.377 16:50:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.377 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:54.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:54.378 16:50:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:54.378 16:50:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:54.378 16:50:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:54.378 16:50:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.378 16:50:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.378 16:50:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.378 16:50:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:54.378 16:50:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:54.378 16:50:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:54.378 16:50:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:00.952 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:00.952 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:00.952 16:50:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:00.952 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:00.952 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:00.952 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:00.953 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:00.953 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:00.953 
16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.953 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.953 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:00.953 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:00.953 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:00.953 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:00.953 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:00.953 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:00.953 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:00.953 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.953 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:00.953 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:00.953 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:00.953 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:01.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:01.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:26:01.212 00:26:01.212 --- 10.0.0.2 ping statistics --- 00:26:01.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.212 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:01.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:01.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:26:01.212 00:26:01.212 --- 10.0.0.1 ping statistics --- 00:26:01.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.212 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=2803052 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 2803052 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2803052 ']' 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:01.212 16:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.472 [2024-10-01 16:50:52.904657] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:26:01.472 [2024-10-01 16:50:52.904720] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.472 [2024-10-01 16:50:52.966646] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.472 [2024-10-01 16:50:53.032600] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.472 [2024-10-01 16:50:53.032639] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.472 [2024-10-01 16:50:53.032645] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.472 [2024-10-01 16:50:53.032650] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.472 [2024-10-01 16:50:53.032655] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.472 [2024-10-01 16:50:53.032676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.472 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:01.472 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:26:01.472 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:01.472 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:01.472 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.472 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.472 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:01.472 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.472 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.731 [2024-10-01 16:50:53.156056] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.731 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.731 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:01.731 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.731 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.731 [2024-10-01 16:50:53.164212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:01.731 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.731 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:01.731 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.731 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.731 null0 00:26:01.731 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.731 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:01.731 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.731 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.731 null1 00:26:01.731 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.731 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:01.731 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.731 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.731 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.732 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2803159 00:26:01.732 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2803159 /tmp/host.sock 00:26:01.732 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:01.732 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2803159 ']' 00:26:01.732 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:01.732 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:01.732 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:01.732 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:01.732 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:01.732 16:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.732 [2024-10-01 16:50:53.255611] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:26:01.732 [2024-10-01 16:50:53.255668] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2803159 ]
00:26:01.732 [2024-10-01 16:50:53.331851] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:01.732 [2024-10-01 16:50:53.392993] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:02.670 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:02.671 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.671 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:02.671 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:02.671 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:02.671 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.671 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:26:02.671 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:26:02.671 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.671 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:02.671 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.671 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:26:02.671 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
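[Annotation] The get_subsystem_names and get_bdev_list helpers being stepped through above are thin wrappers over the host's RPC socket. A reconstruction from the xtrace (host/discovery.sh@59 and @55; the literal /tmp/host.sock appears because xtrace prints expanded values, the script itself presumably uses a variable):

    get_subsystem_names() {
        # Controller names the host has attached (empty until discovery attaches nvme0)
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # Bdevs created from discovered namespaces (nvme0n1, nvme0n2, ... once attached)
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }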
00:26:02.671 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:02.671 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:02.671 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.671 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:02.671 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:02.671 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:02.930 [2024-10-01 16:50:54.431405] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:26:02.930 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:26:02.931 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:02.931 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:02.931 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.931 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:02.931 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:02.931 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:02.931 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:03.189 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]]
00:26:03.189 16:50:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
00:26:03.756 [2024-10-01 16:50:55.174162] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:26:03.756 [2024-10-01 16:50:55.174183] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:26:03.756 [2024-10-01 16:50:55.174196] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
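[Annotation] waitforcondition, whose expansion dominates the rest of the trace, is a bounded poll loop. A sketch reconstructed from the common/autotest_common.sh@914-@920 lines above (the timeout return value is an assumption; that branch is not visible in this run):

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval $cond; then
                return 0   # condition met, e.g. [[ "$(get_subsystem_names)" == "nvme0" ]]
            fi
            sleep 1        # retry once per second, at most ~10 times
        done
        return 1           # assumed failure path (not exercised here)
    }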
00:26:03.756 [2024-10-01 16:50:55.261459] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:26:04.016 [2024-10-01 16:50:55.445960] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:26:04.016 [2024-10-01 16:50:55.445990] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:04.016 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:04.280 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:26:04.280 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:26:04.280 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:26:04.280 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:26:04.280 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:26:04.280 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:26:04.280 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:26:04.280 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]]
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
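[Annotation] is_notification_count_eq drives the notification checks above by counting bdev/controller notifications newer than the last seen id. A sketch consistent with the host/discovery.sh@74-@75 and @79-@80 lines (the exact notify_id bookkeeping is inferred from its 0 -> 1 -> 2 progression in the trace, so treat it as an assumption):

    get_notification_count() {
        # Count notifications since the last seen id, then advance the id
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        local expected_count=$1
        # expected_count is visible to the eval'd condition via bash dynamic scoping
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }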
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:04.281 [2024-10-01 16:50:55.943292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:26:04.281 [2024-10-01 16:50:55.943856] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:26:04.281 [2024-10-01 16:50:55.943881] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:04.281 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:04.541 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:04.541 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:26:04.541 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:04.541 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:04.541 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:26:04.541 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:26:04.541 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:26:04.541 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:26:04.541 16:50:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:04.541 [2024-10-01 16:50:56.031544] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:26:04.541 16:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
00:26:04.800 [2024-10-01 16:50:56.337091] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:26:04.800 [2024-10-01 16:50:56.337109] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:26:04.800 [2024-10-01 16:50:56.337114] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:26:05.737 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
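[Annotation] get_subsystem_paths, used for the 4420/4421 assertions above, lists the trsvcid of every path on one controller, numerically sorted so the expected string is stable. Reconstructed from the host/discovery.sh@63 lines:

    get_subsystem_paths() {
        local name=$1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n $name |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs   # e.g. "4420 4421"
    }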
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:05.738 [2024-10-01 16:50:57.211423] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:26:05.738 [2024-10-01 16:50:57.211445] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:05.738 [2024-10-01 16:50:57.218062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:05.738 [2024-10-01 16:50:57.218081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:05.738 [2024-10-01 16:50:57.218090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:05.738 [2024-10-01 16:50:57.218102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:05.738 [2024-10-01 16:50:57.218110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:05.738 [2024-10-01 16:50:57.218117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:05.738 [2024-10-01 16:50:57.218125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:05.738 [2024-10-01 16:50:57.218132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:05.738 [2024-10-01 16:50:57.218139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778d90 is same with the state(6) to be set
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:05.738 [2024-10-01 16:50:57.228067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1778d90 (9): Bad file descriptor
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:05.738 [2024-10-01 16:50:57.238105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:26:05.738 [2024-10-01 16:50:57.238436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.738 [2024-10-01 16:50:57.238451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1778d90 with addr=10.0.0.2, port=4420
00:26:05.738 [2024-10-01 16:50:57.238459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778d90 is same with the state(6) to be set
00:26:05.738 [2024-10-01 16:50:57.238470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1778d90 (9): Bad file descriptor
00:26:05.738 [2024-10-01 16:50:57.238481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:26:05.738 [2024-10-01 16:50:57.238488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:26:05.738 [2024-10-01 16:50:57.238496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:26:05.738 [2024-10-01 16:50:57.238507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:26:05.738 [2024-10-01 16:50:57.248158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:26:05.738 [2024-10-01 16:50:57.248447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.738 [2024-10-01 16:50:57.248463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1778d90 with addr=10.0.0.2, port=4420
00:26:05.738 [2024-10-01 16:50:57.248470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778d90 is same with the state(6) to be set
00:26:05.738 [2024-10-01 16:50:57.248481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1778d90 (9): Bad file descriptor
00:26:05.738 [2024-10-01 16:50:57.248491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:26:05.738 [2024-10-01 16:50:57.248497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:26:05.738 [2024-10-01 16:50:57.248504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:26:05.738 [2024-10-01 16:50:57.248514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
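[Annotation] The connect()/errno 111 (ECONNREFUSED) storm that follows is expected: the 4420 listener was just removed, so every reconnect attempt to that path fails until the discovery poller fetches a fresh log page and drops the stale path. The test therefore only asserts eventual convergence, along the lines of:

    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # NVMF_SECOND_PORT=4421 in this suite; wait until 4420 disappears from the path list
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'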
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:05.738 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:05.738 [2024-10-01 16:50:57.258211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:26:05.738 [2024-10-01 16:50:57.258498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.738 [2024-10-01 16:50:57.258512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1778d90 with addr=10.0.0.2, port=4420
00:26:05.738 [2024-10-01 16:50:57.258519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778d90 is same with the state(6) to be set
00:26:05.738 [2024-10-01 16:50:57.258530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1778d90 (9): Bad file descriptor
00:26:05.738 [2024-10-01 16:50:57.258547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:26:05.738 [2024-10-01 16:50:57.258554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:26:05.738 [2024-10-01 16:50:57.258560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:26:05.738 [2024-10-01 16:50:57.258571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.739 [2024-10-01 16:50:57.268266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:26:05.739 [2024-10-01 16:50:57.268572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.739 [2024-10-01 16:50:57.268584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1778d90 with addr=10.0.0.2, port=4420
00:26:05.739 [2024-10-01 16:50:57.268591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778d90 is same with the state(6) to be set
00:26:05.739 [2024-10-01 16:50:57.268601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1778d90 (9): Bad file descriptor
00:26:05.739 [2024-10-01 16:50:57.268623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:26:05.739 [2024-10-01 16:50:57.268630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:26:05.739 [2024-10-01 16:50:57.268637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:26:05.739 [2024-10-01 16:50:57.268647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.739 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:05.739 [2024-10-01 16:50:57.278315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:26:05.739 [2024-10-01 16:50:57.278600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.739 [2024-10-01 16:50:57.278610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1778d90 with addr=10.0.0.2, port=4420
00:26:05.739 [2024-10-01 16:50:57.278617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778d90 is same with the state(6) to be set
00:26:05.739 [2024-10-01 16:50:57.278628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1778d90 (9): Bad file descriptor
00:26:05.739 [2024-10-01 16:50:57.278638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:26:05.739 [2024-10-01 16:50:57.278644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:26:05.739 [2024-10-01 16:50:57.278651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:26:05.739 [2024-10-01 16:50:57.278667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.739 [2024-10-01 16:50:57.288365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:26:05.739 [2024-10-01 16:50:57.288650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.739 [2024-10-01 16:50:57.288661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1778d90 with addr=10.0.0.2, port=4420
00:26:05.739 [2024-10-01 16:50:57.288668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778d90 is same with the state(6) to be set
00:26:05.739 [2024-10-01 16:50:57.288678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1778d90 (9): Bad file descriptor
00:26:05.739 [2024-10-01 16:50:57.288701] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:26:05.739 [2024-10-01 16:50:57.288708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:26:05.739 [2024-10-01 16:50:57.288715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:26:05.739 [2024-10-01 16:50:57.288725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.739 [2024-10-01 16:50:57.298414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:26:05.739 [2024-10-01 16:50:57.298691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.739 [2024-10-01 16:50:57.298703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1778d90 with addr=10.0.0.2, port=4420
00:26:05.739 [2024-10-01 16:50:57.298710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778d90 is same with the state(6) to be set
00:26:05.739 [2024-10-01 16:50:57.298720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1778d90 (9): Bad file descriptor
00:26:05.739 [2024-10-01 16:50:57.298736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:26:05.739 [2024-10-01 16:50:57.298743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:26:05.739 [2024-10-01 16:50:57.298749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:26:05.739 [2024-10-01 16:50:57.298760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:05.739 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:26:05.739 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:26:05.739 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:26:05.739 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:26:05.739 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:26:05.739 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:26:05.739 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:26:05.739 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:26:05.739 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:26:05.739 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:26:05.739 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:05.739 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:26:05.739 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:05.739 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:26:05.739 [2024-10-01 16:50:57.308466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:26:05.739 [2024-10-01 16:50:57.308777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
addr=10.0.0.2, port=4420 00:26:05.739 [2024-10-01 16:50:57.308795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778d90 is same with the state(6) to be set 00:26:05.739 [2024-10-01 16:50:57.308806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1778d90 (9): Bad file descriptor 00:26:05.739 [2024-10-01 16:50:57.308962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:05.739 [2024-10-01 16:50:57.308976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:05.739 [2024-10-01 16:50:57.308984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:05.739 [2024-10-01 16:50:57.308996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.739 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.739 [2024-10-01 16:50:57.318518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:05.739 [2024-10-01 16:50:57.318829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.739 [2024-10-01 16:50:57.318840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1778d90 with addr=10.0.0.2, port=4420 00:26:05.739 [2024-10-01 16:50:57.318847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778d90 is same with the state(6) to be set 00:26:05.739 [2024-10-01 16:50:57.318858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1778d90 (9): Bad file descriptor 00:26:05.739 [2024-10-01 16:50:57.318874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:05.739 [2024-10-01 16:50:57.318880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:05.739 [2024-10-01 16:50:57.318887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:05.739 [2024-10-01 16:50:57.318897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.739 [2024-10-01 16:50:57.328568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:05.739 [2024-10-01 16:50:57.328866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.739 [2024-10-01 16:50:57.328881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1778d90 with addr=10.0.0.2, port=4420 00:26:05.739 [2024-10-01 16:50:57.328887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778d90 is same with the state(6) to be set 00:26:05.739 [2024-10-01 16:50:57.328898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1778d90 (9): Bad file descriptor 00:26:05.739 [2024-10-01 16:50:57.328921] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:05.739 [2024-10-01 16:50:57.328928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:05.739 [2024-10-01 16:50:57.328934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:05.739 [2024-10-01 16:50:57.328944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.739 [2024-10-01 16:50:57.338618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:05.739 [2024-10-01 16:50:57.338931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.739 [2024-10-01 16:50:57.338942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1778d90 with addr=10.0.0.2, port=4420 00:26:05.739 [2024-10-01 16:50:57.338948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778d90 is same with the state(6) to be set 00:26:05.739 [2024-10-01 16:50:57.338959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1778d90 (9): Bad file descriptor 00:26:05.739 [2024-10-01 16:50:57.338988] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:05.739 [2024-10-01 16:50:57.339003] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:05.739 [2024-10-01 16:50:57.339021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:05.739 [2024-10-01 16:50:57.339029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:05.739 [2024-10-01 16:50:57.339036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:05.739 [2024-10-01 16:50:57.339049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
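[Editor's note] The discovery records just above mark the actual failover: the discovery log page no longer lists 10.0.0.2:4420 ("not found", so that path is removed) while 4421 is "found again", and every remaining reconnect against 4420 dies with errno 111 (ECONNREFUSED). The condition the test keeps polling is the @63 rpc_cmd/jq pipeline; rpc_cmd wraps scripts/rpc.py, so issued by hand against the same host socket it would look like the sketch below (running it outside the harness is an assumption):

  # Print the service IDs (ports) of every active path on controller nvme0;
  # the test loops until this yields exactly "4421" ($NVMF_SECOND_PORT).
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs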
00:26:05.739 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:26:05.739 16:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:26:06.675 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:06.675 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:06.675 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:06.934 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 
-- # (( max-- )) 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.935 16:50:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.314 [2024-10-01 16:50:59.656833] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:08.314 [2024-10-01 16:50:59.656849] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:08.314 [2024-10-01 16:50:59.656861] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:08.314 [2024-10-01 16:50:59.786272] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:08.314 [2024-10-01 16:50:59.892480] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:08.314 [2024-10-01 16:50:59.892508] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.314 request: 00:26:08.314 { 00:26:08.314 "name": "nvme", 00:26:08.314 "trtype": "tcp", 00:26:08.314 "traddr": "10.0.0.2", 00:26:08.314 "adrfam": "ipv4", 00:26:08.314 "trsvcid": "8009", 00:26:08.314 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:08.314 "wait_for_attach": true, 00:26:08.314 "method": "bdev_nvme_start_discovery", 00:26:08.314 "req_id": 1 00:26:08.314 } 00:26:08.314 Got JSON-RPC error response 00:26:08.314 response: 00:26:08.314 { 00:26:08.314 "code": -17, 00:26:08.314 "message": "File exists" 00:26:08.314 } 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.314 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:08.315 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:08.315 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.315 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:08.315 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.315 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:08.315 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.315 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:08.315 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.577 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:08.577 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:08.577 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:08.577 16:50:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.577 request: 00:26:08.577 { 00:26:08.577 "name": "nvme_second", 00:26:08.577 "trtype": "tcp", 00:26:08.577 "traddr": "10.0.0.2", 00:26:08.577 "adrfam": "ipv4", 00:26:08.577 "trsvcid": "8009", 00:26:08.577 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:08.577 "wait_for_attach": true, 00:26:08.577 "method": "bdev_nvme_start_discovery", 00:26:08.577 "req_id": 1 00:26:08.577 } 00:26:08.577 Got JSON-RPC error response 00:26:08.577 response: 00:26:08.577 { 00:26:08.577 "code": -17, 00:26:08.577 "message": "File exists" 00:26:08.577 } 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 
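[Editor's note] Both bdev_nvme_start_discovery attempts above fail the same way: whether the name is reused (nvme) or new (nvme_second), requesting a second discovery service on the already-monitored trid 10.0.0.2:8009 is rejected with JSON-RPC error -17 / "File exists" (-EEXIST), which the NOT wrapper then maps to the expected es=1. By hand, with the same arguments the trace passes through rpc_cmd (running it standalone is an assumption):

  # A second discovery on a trid that already has one -> code -17.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test -w
  # => request rejected: {"code": -17, "message": "File exists"}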
00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.577 16:51:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.513 [2024-10-01 16:51:01.119765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.513 [2024-10-01 16:51:01.119800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1778b60 with addr=10.0.0.2, port=8010 00:26:09.513 [2024-10-01 16:51:01.119815] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:09.513 [2024-10-01 16:51:01.119823] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:09.513 [2024-10-01 16:51:01.119830] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:10.447 [2024-10-01 16:51:02.122214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:10.447 [2024-10-01 16:51:02.122236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1778b60 with addr=10.0.0.2, port=8010 00:26:10.447 [2024-10-01 16:51:02.122247] 
nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:10.447 [2024-10-01 16:51:02.122253] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:10.447 [2024-10-01 16:51:02.122259] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:11.822 [2024-10-01 16:51:03.124254] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:11.822 request: 00:26:11.822 { 00:26:11.822 "name": "nvme_second", 00:26:11.822 "trtype": "tcp", 00:26:11.822 "traddr": "10.0.0.2", 00:26:11.822 "adrfam": "ipv4", 00:26:11.822 "trsvcid": "8010", 00:26:11.822 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:11.822 "wait_for_attach": false, 00:26:11.822 "attach_timeout_ms": 3000, 00:26:11.822 "method": "bdev_nvme_start_discovery", 00:26:11.822 "req_id": 1 00:26:11.822 } 00:26:11.822 Got JSON-RPC error response 00:26:11.822 response: 00:26:11.822 { 00:26:11.822 "code": -110, 00:26:11.822 "message": "Connection timed out" 00:26:11.822 } 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2803159 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe 
-v -r nvme-tcp 00:26:11.822 rmmod nvme_tcp 00:26:11.822 rmmod nvme_fabrics 00:26:11.822 rmmod nvme_keyring 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 2803052 ']' 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 2803052 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 2803052 ']' 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 2803052 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2803052 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2803052' 00:26:11.822 killing process with pid 2803052 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 2803052 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 2803052 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.822 16:51:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:14.360 00:26:14.360 real 0m19.697s 00:26:14.360 user 0m24.087s 00:26:14.360 sys 0m6.485s 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.360 ************************************ 00:26:14.360 END TEST nvmf_host_discovery 00:26:14.360 ************************************ 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.360 ************************************ 00:26:14.360 START TEST nvmf_host_multipath_status 00:26:14.360 ************************************ 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:14.360 * Looking for test storage... 00:26:14.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:14.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.360 --rc genhtml_branch_coverage=1 00:26:14.360 --rc genhtml_function_coverage=1 00:26:14.360 --rc genhtml_legend=1 00:26:14.360 --rc geninfo_all_blocks=1 00:26:14.360 --rc geninfo_unexecuted_blocks=1 00:26:14.360 00:26:14.360 ' 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:14.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.360 --rc genhtml_branch_coverage=1 00:26:14.360 --rc genhtml_function_coverage=1 00:26:14.360 --rc genhtml_legend=1 00:26:14.360 --rc geninfo_all_blocks=1 00:26:14.360 --rc geninfo_unexecuted_blocks=1 00:26:14.360 00:26:14.360 ' 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:14.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.360 --rc genhtml_branch_coverage=1 00:26:14.360 --rc genhtml_function_coverage=1 00:26:14.360 --rc genhtml_legend=1 00:26:14.360 --rc geninfo_all_blocks=1 00:26:14.360 --rc geninfo_unexecuted_blocks=1 00:26:14.360 00:26:14.360 ' 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:14.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.360 --rc genhtml_branch_coverage=1 00:26:14.360 --rc genhtml_function_coverage=1 00:26:14.360 --rc genhtml_legend=1 00:26:14.360 --rc geninfo_all_blocks=1 00:26:14.360 --rc geninfo_unexecuted_blocks=1 00:26:14.360 00:26:14.360 ' 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
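[Editor's note] The lt 1.15 2 call traced above is scripts/common.sh asking whether the installed lcov predates 2.x: cmp_versions splits both versions on ".-:" and compares them numeric component by component until 1 < 2 settles it, after which the harness sets its --rc lcov_* coverage options. The same predicate can be written with sort -V; this one-liner is an equivalent under GNU coreutils, not the script's own implementation:

  # "Is version $1 strictly older than $2?" -- matches the cmp_versions walk
  # above for plain numeric versions; sort -V availability is an assumption.
  lt() { [ "$1" = "$2" ] && return 1; [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
  lt 1.15 2 && echo "lcov 1.15 predates 2"   # -> lcov 1.15 predates 2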
00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.360 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:14.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:14.361 16:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:22.486 16:51:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.486 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:22.487 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
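[Editor's note] The device matching above is nvmf/common.sh sorting the host's NICs into buckets by vendor:device ID: under intel=0x8086, 0x1592 and 0x159b are the two E810 IDs (0x37d2 is an X722), the mellanox=0x15b3 entries fill the mlx bucket, and the "[[ e810 == e810 ]]" branch then promotes the e810 array to pci_devs, which is why the 0000:4b:00.x ports reported in this trace are the ones configured. Outside the script, the same inventory question can be put to lspci (an equivalent probe, not the script's pci_bus_cache mechanism):

  # Enumerate E810-class ports by the device IDs matched above.
  lspci -nn -d 8086:159b   # the E810 ID that hits on this rig (4b:00.0 / 4b:00.1)
  lspci -nn -d 8086:1592   # the other E810 ID the script also checks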
00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:22.487 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:22.487 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:26:22.487 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:22.487 16:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:22.487 16:51:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:22.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:22.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:26:22.487 00:26:22.487 --- 10.0.0.2 ping statistics --- 00:26:22.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.487 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:22.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:22.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:26:22.487 00:26:22.487 --- 10.0.0.1 ping statistics --- 00:26:22.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.487 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=2809459 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 2809459 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2809459 ']' 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:22.487 16:51:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:22.487 16:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:22.487 [2024-10-01 16:51:13.189481] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:26:22.488 [2024-10-01 16:51:13.189545] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.488 [2024-10-01 16:51:13.279746] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:22.488 [2024-10-01 16:51:13.372222] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:22.488 [2024-10-01 16:51:13.372284] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:22.488 [2024-10-01 16:51:13.372293] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:22.488 [2024-10-01 16:51:13.372300] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:22.488 [2024-10-01 16:51:13.372305] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:22.488 [2024-10-01 16:51:13.372453] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.488 [2024-10-01 16:51:13.372458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.488 16:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:22.488 16:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:26:22.488 16:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:22.488 16:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:22.488 16:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:22.488 16:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:22.488 16:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2809459 00:26:22.488 16:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:22.747 [2024-10-01 16:51:14.306783] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.747 16:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:23.007 Malloc0 00:26:23.007 16:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:26:23.266 16:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:23.266 16:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:23.526 [2024-10-01 16:51:15.136596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.526 16:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:23.785 [2024-10-01 16:51:15.349155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:23.785 16:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2809974 00:26:23.785 16:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:23.785 16:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2809974 /var/tmp/bdevperf.sock 00:26:23.785 16:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2809974 ']' 00:26:23.785 16:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:23.785 16:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:23.785 16:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:23.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
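[editor's note] Taken together, the RPC calls traced above stand up a two-listener NVMe-oF TCP target inside the cvl_0_0_ns_spdk namespace before the initiator side starts. A condensed sketch of that bring-up, using only the rpc.py invocations and flags that appear verbatim in this log (the $rpc shorthand and comments are illustrative, not part of the script):

  # Condensed sketch of the target bring-up shown in the trace above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  $rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, options as recorded above
  $rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -r -m 2  # allow any host, ANA reporting enabled
  $rpc nvmf_subsystem_add_ns $nqn Malloc0                           # expose Malloc0 as the namespace
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420  # first path
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421  # second path

The two listeners on the same address but different ports are what give the host two I/O paths to flip between in the ANA-state checks that follow.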
00:26:23.785 16:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:23.785 16:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:23.785 16:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:24.045 16:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:24.045 16:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:26:24.045 16:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:24.303 16:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:24.561 Nvme0n1 00:26:24.561 16:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:25.129 Nvme0n1 00:26:25.129 16:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:25.129 16:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:27.035 16:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:27.035 16:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:27.294 16:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:27.556 16:51:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:28.512 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:28.512 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:28.512 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.512 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:28.770 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:26:28.770 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:28.770 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.770 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:29.028 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:29.028 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:29.028 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.028 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:29.286 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.286 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:29.286 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.286 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:29.286 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.286 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:29.286 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.286 16:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:29.549 16:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.549 16:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:29.549 16:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.549 16:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:29.838 16:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.838 16:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:29.838 16:51:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:30.149 16:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:30.149 16:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:31.557 16:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:31.557 16:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:31.557 16:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.557 16:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:31.557 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:31.557 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:31.557 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.557 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:31.557 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.557 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:31.557 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.557 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:31.817 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.817 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:31.817 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.817 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:32.078 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.078 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 
-- # port_status 4420 accessible true 00:26:32.078 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.078 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:32.338 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.338 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:32.338 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.338 16:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:32.597 16:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.597 16:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:32.597 16:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:32.597 16:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:32.860 16:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:33.798 16:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:33.798 16:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:34.058 16:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.058 16:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:34.058 16:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.058 16:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:34.058 16:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.058 16:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:34.317 16:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:34.317 
16:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:34.317 16:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.317 16:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:34.576 16:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.576 16:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:34.576 16:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:34.576 16:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.835 16:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.835 16:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:34.835 16:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:34.835 16:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.835 16:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.836 16:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:34.836 16:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.836 16:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:35.095 16:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.095 16:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:35.095 16:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:35.355 16:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:35.615 16:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:36.549 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:36.549 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:36.549 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.549 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:36.808 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.808 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:36.808 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.808 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:37.066 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:37.066 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:37.066 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.066 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:37.326 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.326 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:37.326 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.326 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:37.326 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.326 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:37.326 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:37.326 16:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.584 16:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.584 16:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 
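[editor's note] Every check_status round in this trace reduces to two helpers: one that sets the ANA state of each listener, and one that asserts a single field of bdev_nvme_get_io_paths for a given port. A minimal sketch of that pattern, assuming the same rpc.py and bdevperf socket used in this run; the function names mirror the host/multipath_status.sh helpers visible in the trace, but this is a reading aid reconstructed from the log, not the verbatim script:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  set_ANA_state() {   # $1 = ANA state for port 4420, $2 = ANA state for port 4421
      $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  port_status() {     # $1 = port, $2 = field (current|connected|accessible), $3 = expected value
      local status
      status=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
      [[ "$status" == "$3" ]]
  }

  # e.g. the non_optimized/inaccessible round above: 4421 drops out,
  # so I/O must run on 4420 and 4421 must become inaccessible.
  set_ANA_state non_optimized inaccessible
  sleep 1
  port_status 4420 current    true
  port_status 4421 current    false
  port_status 4420 accessible true
  port_status 4421 accessible false

The connected field stays true for both ports through these rounds because the TCP connections survive ANA transitions; only current (the path carrying I/O) and accessible track the advertised ANA state.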
00:26:37.584 16:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.584 16:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:37.842 16:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:37.842 16:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:37.842 16:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:38.101 16:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:38.359 16:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:39.295 16:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:39.295 16:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:39.295 16:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.295 16:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:39.555 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:39.555 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:39.555 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.555 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:39.813 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:39.813 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:39.813 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.813 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:40.073 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.073 16:51:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:40.073 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.073 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:40.073 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.073 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:40.073 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.073 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:40.334 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:40.334 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:40.334 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.334 16:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:40.593 16:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:40.593 16:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:40.593 16:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:40.853 16:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:41.113 16:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:42.054 16:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:42.054 16:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:42.054 16:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.054 16:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:42.315 16:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:42.315 16:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:42.315 16:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.315 16:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:42.575 16:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.575 16:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:42.575 16:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.575 16:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:42.575 16:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.575 16:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:42.575 16:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.575 16:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:42.835 16:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.835 16:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:42.836 16:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.836 16:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:43.096 16:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:43.096 16:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:43.096 16:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.096 16:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:43.357 16:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.357 16:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:43.617 16:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:43.618 16:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:43.878 16:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:43.878 16:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:45.263 16:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:45.263 16:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:45.263 16:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.263 16:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:45.263 16:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.263 16:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:45.263 16:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.263 16:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:45.524 16:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.524 16:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:45.524 16:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.524 16:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:45.524 16:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.524 16:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:45.524 16:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.524 16:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:45.784 16:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.784 16:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:45.784 16:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.784 16:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:46.044 16:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.044 16:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:46.044 16:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.044 16:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:46.305 16:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.305 16:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:46.305 16:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:46.564 16:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:46.564 16:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:47.945 16:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:47.945 16:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:47.945 16:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.945 16:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:47.945 16:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:47.945 16:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:47.946 16:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.946 16:51:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:48.206 16:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.206 16:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:48.206 16:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.206 16:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:48.206 16:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.206 16:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:48.206 16:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.206 16:51:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:48.467 16:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.467 16:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:48.467 16:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.467 16:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:48.727 16:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.727 16:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:48.727 16:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.727 16:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:48.988 16:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.988 16:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:48.988 16:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:49.247 16:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:49.507 16:51:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:50.448 16:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:50.448 16:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:50.448 16:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.448 16:51:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:50.448 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.448 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:50.448 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.448 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:50.708 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.708 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:50.708 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.708 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:50.968 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.968 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:50.968 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.968 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:51.227 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.227 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:51.227 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.227 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:26:51.486 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.486 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:51.486 16:51:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.486 16:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:51.747 16:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.747 16:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:51.747 16:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:52.007 16:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:52.007 16:51:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:53.388 16:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:53.388 16:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:53.388 16:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.388 16:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:53.388 16:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.388 16:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:53.388 16:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.388 16:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:53.648 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:53.648 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:53.648 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.648 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
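Every port_status call above expands to the same rpc.py + jq pair against bdevperf's RPC socket. A sketch of the helper as it can be read back from the repeated @64 lines (again inferred from the trace, not the verbatim multipath_status.sh):

    # port_status <trsvcid> <attribute> <expected>: query the io_paths view
    # exposed by bdevperf and compare one field of the matching path.
    port_status() {
        local port=$1 attr=$2 expected=$3   # e.g. port_status 4421 current false
        local actual
        actual=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }

check_status true false true true true false then reduces to the six calls traced around this point: current, connected and accessible for the 4420 and 4421 paths in turn, with the two false arguments matching the listener that was just made inaccessible.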
00:26:53.648 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:53.648 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:53.648 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:53.648 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:53.648 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:53.648 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:53.648 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:53.907 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:53.907 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:53.907 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:53.907 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:54.167 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:54.167 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:54.167 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:54.167 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:54.428 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:54.428 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2809974
00:26:54.428 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2809974 ']'
00:26:54.428 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2809974
00:26:54.429 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:26:54.429 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:54.429 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2809974
00:26:54.429 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:26:54.429 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:26:54.429 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2809974'
killing process with pid 2809974
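The killprocess guards traced at common/autotest_common.sh@950-@968 above, together with the kill/wait that follow below, amount to roughly this helper (a sketch read back from the trace; the real implementation lives in common/autotest_common.sh):

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                 # @950: require a pid argument
        kill -0 "$pid" || return 1                # @954: pid must still be alive
        if [ "$(uname)" = Linux ]; then           # @955
            process_name=$(ps --no-headers -o comm= "$pid")   # @956: reactor_2 here
        fi
        [ "$process_name" = sudo ] && return 1    # @960: refuse to kill a bare sudo
        echo "killing process with pid $pid"      # @968
        kill "$pid"                               # @969
        wait "$pid"                               # @974: reap it and surface its exit code
    }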
16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2809974
00:26:54.429 16:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2809974
00:26:54.429 {
00:26:54.429   "results": [
00:26:54.429     {
00:26:54.429       "job": "Nvme0n1",
00:26:54.429       "core_mask": "0x4",
00:26:54.429       "workload": "verify",
00:26:54.429       "status": "terminated",
00:26:54.429       "verify_range": {
00:26:54.429         "start": 0,
00:26:54.429         "length": 16384
00:26:54.429       },
00:26:54.429       "queue_depth": 128,
00:26:54.429       "io_size": 4096,
00:26:54.429       "runtime": 29.208782,
00:26:54.429       "iops": 11269.316194013156,
00:26:54.429       "mibps": 44.02076638286389,
00:26:54.429       "io_failed": 0,
00:26:54.429       "io_timeout": 0,
00:26:54.429       "avg_latency_us": 11322.30792979583,
00:26:54.429       "min_latency_us": 601.7969230769231,
00:26:54.429       "max_latency_us": 3084426.633846154
00:26:54.429     }
00:26:54.429   ],
00:26:54.429   "core_count": 1
00:26:54.429 }
00:26:54.717 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2809974
00:26:54.717 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-10-01 16:51:15.414677] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
[2024-10-01 16:51:15.414734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2809974 ]
[2024-10-01 16:51:15.465620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-01 16:51:15.518281] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
[2024-10-01 16:51:16.574932] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01
Running I/O for 90 seconds...
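A quick cross-check of the terminated-job summary above: the reported mibps is just iops * io_size scaled to MiB (11269.316194013156 * 4096 B / 2^20 = 44.0207... MiB/s), and the 29.2 s runtime against the requested 90 s of I/O is expected, since killprocess stopped bdevperf as soon as the final check_status passed, which is also why "status" reads "terminated". For example (the results.json filename is hypothetical, assuming the JSON block above were saved to a file):

    jq '.results[0] | .iops * .io_size / 1048576' results.json   # -> 44.02076638286389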
00:26:54.717 10139.00 IOPS, 39.61 MiB/s
10257.00 IOPS, 40.07 MiB/s
10323.33 IOPS, 40.33 MiB/s
10316.25 IOPS, 40.30 MiB/s
10383.40 IOPS, 40.56 MiB/s
10737.83 IOPS, 41.94 MiB/s
10995.00 IOPS, 42.95 MiB/s
11144.00 IOPS, 43.53 MiB/s
11059.44 IOPS, 43.20 MiB/s
10980.60 IOPS, 42.89 MiB/s
10919.18 IOPS, 42.65 MiB/s
10871.58 IOPS, 42.47 MiB/s
[2024-10-01 16:51:29.608175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-01 16:51:29.608208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0
[... a long run of near-identical nvme_qpair.c NOTICE pairs trimmed for readability: each remaining in-flight WRITE (lba 64792-65608) and READ (lba 64592-64776) on qid:1 is printed with a matching ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion, the same LBA ranges then repeat, and the dump continues ...]
00:26:54.721 [2024-10-01 16:51:29.613054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:54.721 [2024-10-01 16:51:29.613064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.721 [2024-10-01 16:51:29.613070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:54.721 [2024-10-01 16:51:29.613080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.721 [2024-10-01 16:51:29.613086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:54.721 [2024-10-01 16:51:29.613096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.721 [2024-10-01 16:51:29.613102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:54.721 [2024-10-01 16:51:29.613112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.721 [2024-10-01 16:51:29.613117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:54.721 [2024-10-01 16:51:29.613130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.721 [2024-10-01 16:51:29.613135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:54.721 [2024-10-01 16:51:29.613146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.721 [2024-10-01 16:51:29.613151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:54.721 [2024-10-01 16:51:29.613162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.721 [2024-10-01 16:51:29.613168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:54.721 [2024-10-01 16:51:29.613178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.613488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.613499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.623959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.624013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.624021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:26:54.722 [2024-10-01 16:51:29.624032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.624038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.624048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.624054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.624064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.624070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.624080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.624086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.624097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.624102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.624112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.624118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.624129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.624137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.624149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.624154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.624165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.624171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.624181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.624187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.624197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.624203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.624215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.624220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.624231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.624237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.624666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.624678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.624691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.624697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.624708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.624713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.722 [2024-10-01 16:51:29.624724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.722 [2024-10-01 16:51:29.624730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.624740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.624745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.624756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.624761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.624775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.624780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.624791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.624796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.624807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.624812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.624822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.624828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.624839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.624844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.624854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.624860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.624870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.624876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.624886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.624892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.624903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.624908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.624919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.624924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.624935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:54.723 [2024-10-01 16:51:29.624940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.624951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.624956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.624968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.624981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.624991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.624997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.625013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.625029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.625044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.625060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.723 [2024-10-01 16:51:29.625076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.723 [2024-10-01 16:51:29.625093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.723 [2024-10-01 16:51:29.625109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.723 [2024-10-01 16:51:29.625125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.723 [2024-10-01 16:51:29.625141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.723 [2024-10-01 16:51:29.625157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.723 [2024-10-01 16:51:29.625175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.625191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.625207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.625223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.625239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.625255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.625271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.625287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.625302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.723 [2024-10-01 16:51:29.625319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.723 [2024-10-01 16:51:29.625335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.723 [2024-10-01 16:51:29.625351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:54.723 [2024-10-01 16:51:29.625362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.723 [2024-10-01 16:51:29.625368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.724 [2024-10-01 16:51:29.625385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.724 [2024-10-01 16:51:29.625400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.724 [2024-10-01 16:51:29.625416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:26:54.724 [2024-10-01 16:51:29.625427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.724 [2024-10-01 16:51:29.625432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.724 [2024-10-01 16:51:29.625448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.724 [2024-10-01 16:51:29.625464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.724 [2024-10-01 16:51:29.625480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.724 [2024-10-01 16:51:29.625496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.724 [2024-10-01 16:51:29.625512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.724 [2024-10-01 16:51:29.625528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.724 [2024-10-01 16:51:29.625544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.724 [2024-10-01 16:51:29.625559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.724 [2024-10-01 16:51:29.625577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.724 [2024-10-01 16:51:29.625592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:54.724 [2024-10-01 16:51:29.625895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.625954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.625963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.626550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.626561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.626572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.724 [2024-10-01 16:51:29.626578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:54.724 [2024-10-01 16:51:29.626588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
00:26:54.725 [2024-10-01 16:51:29.626958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.626984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.626995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.627000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.627011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.627016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.627027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.627032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.627043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.627048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.627059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.627064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.627074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.627080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.627091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.627096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.627109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.627115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:22 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.627125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.627131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.627143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.627148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.627159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.627164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.627175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.627181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.627191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.627196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.627207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.627212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:54.725 [2024-10-01 16:51:29.627223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.725 [2024-10-01 16:51:29.627228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:54.726 [2024-10-01 16:51:29.627800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 
lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.627960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.627974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.635215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.635253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.635260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.635271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.635277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.635288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.635293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.635303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.635309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.635320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.635325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.635336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.726 [2024-10-01 16:51:29.635341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.635352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.726 [2024-10-01 16:51:29.635358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.635372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.726 [2024-10-01 16:51:29.635378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.635388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.726 [2024-10-01 16:51:29.635394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.635405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.726 [2024-10-01 16:51:29.635410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:54.726 [2024-10-01 16:51:29.635420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.635426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.635437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.635442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.635453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.635458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.635469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.635474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.635484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.635490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.635500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.635506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.635516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.635522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.635532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.635538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:26:54.727 [2024-10-01 16:51:29.635548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.635554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.635913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.635924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.635937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.635943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.635953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.635959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.635976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.635983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.635993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.635999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.636015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.636031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.636047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.636064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.636080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.636096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.636112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.636130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.636147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.636163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.636179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.636195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.636211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.636227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.727 [2024-10-01 16:51:29.636243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.636259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.636275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.636291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.636307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.636324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.636341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.636357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.636373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:54.727 [2024-10-01 16:51:29.636389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.636405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:54.727 [2024-10-01 16:51:29.636415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.727 [2024-10-01 16:51:29.636420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:26:54.728 [2024-10-01 16:51:29.636865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.636988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.636999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.637004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.637015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.637020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.637030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.637036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.637046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.728 [2024-10-01 16:51:29.637051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:54.728 [2024-10-01 16:51:29.637062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.729 [2024-10-01 16:51:29.637067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:54.729 [2024-10-01 16:51:29.637078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.729 [2024-10-01 16:51:29.637083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:54.729 [2024-10-01 16:51:29.637094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.729 [2024-10-01 16:51:29.637099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:54.729 [2024-10-01 16:51:29.637110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.729 [2024-10-01 16:51:29.637117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:54.729 [2024-10-01 16:51:29.637127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.729 [2024-10-01 16:51:29.637133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:54.729 [2024-10-01 16:51:29.637143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.729 [2024-10-01 16:51:29.637148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:54.729 [2024-10-01 16:51:29.637159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.729 [2024-10-01 16:51:29.637164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:54.729 [2024-10-01 16:51:29.637175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.729 [2024-10-01 16:51:29.637181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.729 [2024-10-01 16:51:29.637191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.729 [2024-10-01 16:51:29.637197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:54.729 [2024-10-01 16:51:29.637207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.729 [2024-10-01 16:51:29.637212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:54.729 [2024-10-01 16:51:29.637223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.729 [2024-10-01 16:51:29.637228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:54.729 [2024-10-01 16:51:29.637239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.729 [2024-10-01 16:51:29.637244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:54.729 [2024-10-01 16:51:29.637255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.729 [2024-10-01 16:51:29.637260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:54.729 [2024-10-01 16:51:29.637270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.729 [2024-10-01 16:51:29.637276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:54.729 [2024-10-01 16:51:29.637287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.729 [2024-10-01 16:51:29.637292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:54.729 [2024-10-01 16:51:29.637303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.729 [2024-10-01 16:51:29.637308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:54.729 [2024-10-01 16:51:29.637321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.729 [2024-10-01 16:51:29.637327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:54.729 [2024-10-01 16:51:29.637337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:54.729 [2024-10-01 16:51:29.637343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.637353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.637358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:54.729 [2024-10-01 16:51:29.638364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.729 [2024-10-01 16:51:29.638369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.730 [2024-10-01 16:51:29.638387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.730 [2024-10-01 16:51:29.638403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.730 [2024-10-01 16:51:29.638419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.730 [2024-10-01 16:51:29.638435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.730 [2024-10-01 16:51:29.638451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.638467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.638482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.638499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.638514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.638530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.638546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.638562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.730 [2024-10-01 16:51:29.638579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.730 [2024-10-01 16:51:29.638595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.730 [2024-10-01 16:51:29.638611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.730 [2024-10-01 16:51:29.638627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.730 [2024-10-01 16:51:29.638643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.730 [2024-10-01 16:51:29.638955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.730 [2024-10-01 16:51:29.638978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.638988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.730 [2024-10-01 16:51:29.638994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.639004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.730 [2024-10-01 16:51:29.639010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.639021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.639026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.639037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.639042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.639053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.639058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.639069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.639075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.639094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.639100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.639110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.639116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.639126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.639132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.639143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.639149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.639159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.639165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.639175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.639181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.639191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.639197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.639207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.639213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.639223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.639229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.639240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.730 [2024-10-01 16:51:29.639245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:54.730 [2024-10-01 16:51:29.639255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.731 [2024-10-01 16:51:29.639261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.639271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.731 [2024-10-01 16:51:29.639277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.639289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.731 [2024-10-01 16:51:29.639294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.639305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.639310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.639321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.639327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.639337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.639343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.640993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.640999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.641010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.641015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.641026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.641031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.641042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.641047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.641058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.641063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.641074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.641080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.641091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.641096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.641107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.641112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.641123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.641128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.641139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.641145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.641155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.641160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.641171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.731 [2024-10-01 16:51:29.641178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:26:54.731 [2024-10-01 16:51:29.641189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.641689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.641695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.642064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.642073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.642085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.642090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.642101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.642107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.642117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.642123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.642134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.642139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.642150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.642157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.642168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.642173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.642184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.732 [2024-10-01 16:51:29.642189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:54.732 [2024-10-01 16:51:29.642200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.642205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.642216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.642221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.642232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.642237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.642248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.642253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.642264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.642269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.642280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.642285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.642295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.642301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.642312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.642317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.642328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.642333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.642343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.642350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.642361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.642366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.642377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.642382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.642392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.642398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.642408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.646949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.646991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.646999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.647015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.647031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.647047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.647064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.647080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.733 [2024-10-01 16:51:29.647096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.733 [2024-10-01 16:51:29.647113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.733 [2024-10-01 16:51:29.647133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.733 [2024-10-01 16:51:29.647150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.733 [2024-10-01 16:51:29.647166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.733 [2024-10-01 16:51:29.647182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.733 [2024-10-01 16:51:29.647198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.647214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.647230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.647246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.647263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.647622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.647640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.647656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.647675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.733 [2024-10-01 16:51:29.647692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.733 [2024-10-01 16:51:29.647708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.733 [2024-10-01 16:51:29.647724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.733 [2024-10-01 16:51:29.647740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.733 [2024-10-01 16:51:29.647756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.733 [2024-10-01 16:51:29.647772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.733 [2024-10-01 16:51:29.647788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.733 [2024-10-01 16:51:29.647804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.733 [2024-10-01 16:51:29.647820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.733 [2024-10-01 16:51:29.647836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.733 [2024-10-01 16:51:29.647852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.733 [2024-10-01 16:51:29.647869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:54.733 [2024-10-01 16:51:29.647880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.734 [2024-10-01 16:51:29.647885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.647896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.734 [2024-10-01 16:51:29.647901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.647912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.734 [2024-10-01 16:51:29.647917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.647928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.734 [2024-10-01 16:51:29.647933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.647944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.734 [2024-10-01 16:51:29.647950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.647961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.734 [2024-10-01 16:51:29.647966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.647982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.734 [2024-10-01 16:51:29.647987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.647998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.734 [2024-10-01 16:51:29.648004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.648014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.734 [2024-10-01 16:51:29.648019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.648030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.734 [2024-10-01 16:51:29.648035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.648046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.734 [2024-10-01 16:51:29.648052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.648062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.734 [2024-10-01 16:51:29.648072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.648082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.734 [2024-10-01 16:51:29.648088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.648098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.734 [2024-10-01 16:51:29.648104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.648114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.734 [2024-10-01 16:51:29.648120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.648130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.734 [2024-10-01 16:51:29.648136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.648146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.734 [2024-10-01 16:51:29.648152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.648162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.734 [2024-10-01 16:51:29.648168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.648178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.734 [2024-10-01 16:51:29.648184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.648194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.734 [2024-10-01 16:51:29.648200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.648210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.734 [2024-10-01 16:51:29.648216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.648226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.734 [2024-10-01 16:51:29.648232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.648242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.734 [2024-10-01 16:51:29.648248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.648258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.734 [2024-10-01 16:51:29.648263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:54.734 [2024-10-01 16:51:29.648275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.734 [2024-10-01 16:51:29.648681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:54.734 [2024-10-01 16:51:29.648692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.648697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.648707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.648713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.648723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.648729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.648739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:54.735 [2024-10-01 16:51:29.648744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.648755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.648760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.648771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.648776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.648787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.648792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.648803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.648809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.648819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.648824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.648835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.648840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.648851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.648858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.648868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.648874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.648884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.648890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.648901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.648906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.648917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.648922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.648933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.648938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.648949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.648954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.648965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.648974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.648985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.648990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:26:54.735 [2024-10-01 16:51:29.649873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.649991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.649996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.650007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.650012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.650023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.650028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.650039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.650044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.650055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.650060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.650071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.650076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.650087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.650092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.650103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.650110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.650120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.650126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.650136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.650142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.650152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.735 [2024-10-01 16:51:29.650157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.650168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.735 [2024-10-01 16:51:29.650174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.650184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.735 [2024-10-01 16:51:29.650190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.650200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.735 [2024-10-01 16:51:29.650206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.650217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.735 [2024-10-01 16:51:29.650223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:54.735 [2024-10-01 16:51:29.650233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.735 [2024-10-01 16:51:29.650239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.650255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.650271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.650287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.650304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.650320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.650645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:54.736 [2024-10-01 16:51:29.650662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.650678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.650694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.650710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.650726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.650742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.650758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.650774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.650790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.650806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.650824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.650840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.650856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.650872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.650888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.650904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.650920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.650936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.650952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.650968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.650989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.650999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.736 [2024-10-01 16:51:29.651004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.651017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.651022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.652215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.652223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.652234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.652240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.652251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.652256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.652266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.652272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.652282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.652288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.652299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.652304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.652315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.652320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:26:54.736 [2024-10-01 16:51:29.652331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.652336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.652346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.652352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.652362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.652368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.652379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.652384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.652395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.652403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.652414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.652419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.652430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.652435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.652445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.652451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.652461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.652467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:54.736 [2024-10-01 16:51:29.652477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.736 [2024-10-01 16:51:29.652483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:54.736 [2024-10-01 16:51:29.652629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.736 [2024-10-01 16:51:29.652637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000d p:0 m:0 dnr:0
[... identical nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs repeat for every outstanding I/O on qid:1 — WRITE lba:64784-65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ lba:64592-64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 — each completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 16:51:29.652648 through 16:51:29.658443, Jenkins clock 00:26:54.736-00:26:54.740 ...]
00:26:54.740 [2024-10-01 16:51:29.658453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1
lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.740 [2024-10-01 16:51:29.658459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:54.740 [2024-10-01 16:51:29.658470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.740 [2024-10-01 16:51:29.658475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:54.740 [2024-10-01 16:51:29.658486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.740 [2024-10-01 16:51:29.658491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:54.740 [2024-10-01 16:51:29.658502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.740 [2024-10-01 16:51:29.658507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:54.740 [2024-10-01 16:51:29.658518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.740 [2024-10-01 16:51:29.658523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:54.740 [2024-10-01 16:51:29.658534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.740 [2024-10-01 16:51:29.658539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:54.740 [2024-10-01 16:51:29.658550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.740 [2024-10-01 16:51:29.658555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:54.740 [2024-10-01 16:51:29.658566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.740 [2024-10-01 16:51:29.658571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.740 [2024-10-01 16:51:29.658867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.740 [2024-10-01 16:51:29.658875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:54.740 [2024-10-01 16:51:29.658888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.740 [2024-10-01 16:51:29.658894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:54.740 [2024-10-01 16:51:29.658904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.740 [2024-10-01 16:51:29.658910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:54.740 [2024-10-01 16:51:29.658920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.740 [2024-10-01 16:51:29.658926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:54.740 [2024-10-01 16:51:29.658936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.740 [2024-10-01 16:51:29.658941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:54.740 [2024-10-01 16:51:29.658952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.740 [2024-10-01 16:51:29.658958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:54.740 [2024-10-01 16:51:29.658972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.740 [2024-10-01 16:51:29.658978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.658989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.741 [2024-10-01 16:51:29.658994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.659005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.741 [2024-10-01 16:51:29.659011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.659021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.741 [2024-10-01 16:51:29.659027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.659037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.741 [2024-10-01 16:51:29.659043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.659053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.741 [2024-10-01 16:51:29.659059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:26:54.741 [2024-10-01 16:51:29.659069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.741 [2024-10-01 16:51:29.659075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.659089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.741 [2024-10-01 16:51:29.659095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.659105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.741 [2024-10-01 16:51:29.659111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.659121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.741 [2024-10-01 16:51:29.659127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.659138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.741 [2024-10-01 16:51:29.659143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.659154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.741 [2024-10-01 16:51:29.659159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.659170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.741 [2024-10-01 16:51:29.659175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.659186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.741 [2024-10-01 16:51:29.659191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.659203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.741 [2024-10-01 16:51:29.659208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.659219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.741 [2024-10-01 16:51:29.659224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.659235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.741 [2024-10-01 16:51:29.659240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.659251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.741 [2024-10-01 16:51:29.659256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.659267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.659272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.660599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.660610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.660621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.660627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.660638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.660643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.660654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.660659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.660670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.660675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.660686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.660691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.660701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.660707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.660717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.660723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.660733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.660739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.660749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.660754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.660765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.660770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.660781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.660786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.660797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.660803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.660814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.660819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.660830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.660835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.660846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.660851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.660862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:54.741 [2024-10-01 16:51:29.660868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:54.741 [2024-10-01 16:51:29.661341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.741 [2024-10-01 16:51:29.661347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:26:54.742 [2024-10-01 16:51:29.661485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.661759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.661765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:54.742 [2024-10-01 16:51:29.662346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:54.742 [2024-10-01 16:51:29.662614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.742 [2024-10-01 16:51:29.662619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:54.743 [2024-10-01 16:51:29.662630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.743 [2024-10-01 16:51:29.662635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:54.743 [2024-10-01 16:51:29.662646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.743 [2024-10-01 16:51:29.662651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:54.743 [2024-10-01 16:51:29.662662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:54.743 [2024-10-01 16:51:29.662667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0
[... repeated NOTICE pairs elided (elapsed 00:26:54.743 to 00:26:54.746, wall clock 2024-10-01 16:51:29.662678 to 16:51:29.669906): nvme_qpair.c: 243:nvme_io_qpair_print_command prints each queued READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) or WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) on sqid:1 nsid:1 len:8 with lba in the 64592 to 65608 range, and nvme_qpair.c: 474:spdk_nvme_print_completion reports every one of them completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 ...]
00:26:54.746 [2024-10-01 16:51:29.669919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:54.746 [2024-10-01 16:51:29.669925] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:54.746 [2024-10-01 16:51:29.669938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.746 [2024-10-01 16:51:29.669943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:54.746 [2024-10-01 16:51:29.669956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.746 [2024-10-01 16:51:29.669962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:54.746 [2024-10-01 16:51:29.669978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.746 [2024-10-01 16:51:29.669984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:54.746 [2024-10-01 16:51:29.669996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.746 [2024-10-01 16:51:29.670002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:54.746 [2024-10-01 16:51:29.670015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.746 [2024-10-01 16:51:29.670022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:54.746 [2024-10-01 16:51:29.670035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.746 [2024-10-01 16:51:29.670040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:54.746 [2024-10-01 16:51:29.670053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.746 [2024-10-01 16:51:29.670059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:54.746 [2024-10-01 16:51:29.670071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.746 [2024-10-01 16:51:29.670077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:54.746 [2024-10-01 16:51:29.670089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.746 [2024-10-01 16:51:29.670095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:54.747 [2024-10-01 16:51:29.670112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:26:54.747 [2024-10-01 16:51:29.670779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.670953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.747 [2024-10-01 16:51:29.670978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.670993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.747 [2024-10-01 16:51:29.670999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.671014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.747 [2024-10-01 16:51:29.671019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.671035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.747 [2024-10-01 16:51:29.671040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.671055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.747 [2024-10-01 16:51:29.671061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.671075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.747 [2024-10-01 16:51:29.671084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.671099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.747 [2024-10-01 16:51:29.671105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.671120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.671125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:29.671141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:29.671146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:54.747 10703.54 IOPS, 41.81 MiB/s 9939.00 IOPS, 38.82 MiB/s 9276.40 IOPS, 36.24 MiB/s 8773.31 IOPS, 34.27 MiB/s 9000.94 IOPS, 35.16 MiB/s 9205.89 IOPS, 35.96 MiB/s 9503.00 IOPS, 37.12 MiB/s 9922.95 IOPS, 38.76 MiB/s 10303.38 IOPS, 40.25 MiB/s 10472.73 IOPS, 40.91 MiB/s 10562.39 IOPS, 41.26 MiB/s 10643.42 IOPS, 41.58 MiB/s 10888.64 IOPS, 42.53 MiB/s 11157.46 IOPS, 43.58 MiB/s [2024-10-01 16:51:43.648654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:43.648692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.747 [2024-10-01 16:51:43.648725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.747 [2024-10-01 16:51:43.648732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.648743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.748 [2024-10-01 16:51:43.648748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.648759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.748 [2024-10-01 16:51:43.648764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.648886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.748 [2024-10-01 16:51:43.648893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.648904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.748 [2024-10-01 16:51:43.648909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.648920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.748 [2024-10-01 16:51:43.648926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.648937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.748 [2024-10-01 16:51:43.648948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.648959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.748 [2024-10-01 16:51:43.648964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.648979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.748 [2024-10-01 16:51:43.648985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.648996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:54.748 [2024-10-01 16:51:43.649002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.650156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.748 [2024-10-01 16:51:43.650171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.650185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.748 [2024-10-01 16:51:43.650191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.650201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.748 [2024-10-01 16:51:43.650207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.650217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.748 [2024-10-01 16:51:43.650223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.650233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.748 [2024-10-01 16:51:43.650239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.650249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.748 [2024-10-01 16:51:43.650255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.650266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.748 [2024-10-01 16:51:43.650271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.650282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.748 [2024-10-01 16:51:43.650287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.650298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.748 [2024-10-01 16:51:43.650306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:54.748 [2024-10-01 16:51:43.650317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 
lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.748 [2024-10-01 16:51:43.650322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:54.748 11366.78 IOPS, 44.40 MiB/s 11330.79 IOPS, 44.26 MiB/s 11292.83 IOPS, 44.11 MiB/s Received shutdown signal, test time was about 29.209408 seconds 00:26:54.748 00:26:54.748 Latency(us) 00:26:54.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.748 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:54.748 Verification LBA range: start 0x0 length 0x4000 00:26:54.748 Nvme0n1 : 29.21 11269.32 44.02 0.00 0.00 11322.31 601.80 3084426.63 00:26:54.748 =================================================================================================================== 00:26:54.748 Total : 11269.32 44.02 0.00 0.00 11322.31 601.80 3084426.63 00:26:54.748 [2024-10-01 16:51:46.012369] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:26:54.748 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:54.748 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:54.748 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:54.748 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:54.748 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:54.748 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:26:54.748 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:54.748 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:54.748 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:54.748 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:54.748 rmmod nvme_tcp 00:26:55.009 rmmod nvme_fabrics 00:26:55.009 rmmod nvme_keyring 00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 2809459 ']' 00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 2809459 00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2809459 ']' 00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2809459 00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
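Every failed I/O in the dump above carries the same completion status, and it is worth decoding once: SPDK prints NVMe status as (SCT/SC), so (03/02) is Status Code Type 0x3, Path Related Status, with Status Code 0x2, Asymmetric Access Inaccessible. That is exactly the signature this multipath test induces when it drives a listener's ANA state to inaccessible: the path is reported unusable, the host's multipath layer retries on the other path, and the throughput samples dip and recover instead of the verify job failing. A minimal sketch for tallying the completion statuses in a capture of this output; it assumes the capture file name (try.txt, the per-test log the script removes above) and relies only on the spdk_nvme_print_completion line format visible in this trace:

#!/usr/bin/env bash
# Sketch only: count NVMe completion statuses in a saved autotest capture.
# try.txt is an assumption taken from the rm -f in the teardown above.
log=${1:-try.txt}
# Extract the status name plus its (SCT/SC) pair from each completion
# record, then count occurrences of each distinct status.
grep -o '\*NOTICE\*: [A-Z ]* ([0-9a-f]*/[0-9a-f]*)' "$log" |
	sed 's/^\*NOTICE\*: //' |
	sort | uniq -c | sort -rn

On a clean run of this test the tally should contain only SUCCESS (00/00) and ASYMMETRIC ACCESS INACCESSIBLE (03/02); any other status would point at a genuine transport or media problem rather than the deliberately flipped path state.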
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2809459
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2809459'
00:26:55.009 killing process with pid 2809459
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2809459
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2809459
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:55.009 16:51:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:57.550
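Taken together, the records above plus the address flush that follows are the whole teardown contract of an nvmf host test: delete the subsystem over RPC, unload the kernel initiator modules, kill the SPDK target, restore iptables, and drop the test IPs. A condensed sketch of the same sequence as a standalone script; the pid 2809459, the NQN, and the interface cvl_0_1 are the concrete values from this run, hard-coded purely for illustration (a real harness discovers them from its own state, as nvmf/common.sh does):

#!/usr/bin/env bash
# Condensed sketch of the nvmftestfini/killprocess teardown traced above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # tear down the subsystem first
sync                                                      # settle outstanding I/O
modprobe -v -r nvme-tcp                                   # unload initiator modules...
modprobe -v -r nvme-fabrics                               # ...fabrics after tcp, as traced
kill 2809459                                              # stop the SPDK target (reactor_0)
wait 2809459 2>/dev/null || true                          # wait only works if the target is our child
iptables-save | grep -v SPDK_NVMF | iptables-restore      # strip the test's firewall rules
ip -4 addr flush cvl_0_1                                  # drop the test IPs from the NIC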
00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:57.550 * Looking for test storage... 00:26:57.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:57.550 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:57.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.551 --rc genhtml_branch_coverage=1 00:26:57.551 --rc genhtml_function_coverage=1 00:26:57.551 --rc genhtml_legend=1 00:26:57.551 --rc geninfo_all_blocks=1 00:26:57.551 --rc geninfo_unexecuted_blocks=1 00:26:57.551 00:26:57.551 ' 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:57.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.551 --rc genhtml_branch_coverage=1 00:26:57.551 --rc genhtml_function_coverage=1 00:26:57.551 --rc genhtml_legend=1 00:26:57.551 --rc geninfo_all_blocks=1 00:26:57.551 --rc geninfo_unexecuted_blocks=1 00:26:57.551 00:26:57.551 ' 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:57.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.551 --rc genhtml_branch_coverage=1 00:26:57.551 --rc genhtml_function_coverage=1 00:26:57.551 --rc genhtml_legend=1 00:26:57.551 --rc geninfo_all_blocks=1 00:26:57.551 --rc geninfo_unexecuted_blocks=1 00:26:57.551 00:26:57.551 ' 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:57.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.551 --rc genhtml_branch_coverage=1 00:26:57.551 --rc genhtml_function_coverage=1 00:26:57.551 --rc genhtml_legend=1 00:26:57.551 --rc geninfo_all_blocks=1 00:26:57.551 --rc geninfo_unexecuted_blocks=1 00:26:57.551 00:26:57.551 ' 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:57.551 
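The trace above is scripts/common.sh proving that the installed lcov (version 1.15) predates version 2 before choosing coverage flags: it splits both version strings on '.', '-' and ':', walks the components left to right, and stops at the first one that differs. A condensed reimplementation of that comparison, with function names mirroring the trace; the real common.sh handles more operators and edge cases, so treat this as a sketch:

#!/usr/bin/env bash
# Sketch of the lt -> cmp_versions logic walked through in the trace above.
cmp_versions() {
	local IFS=.-: op=$2
	local -a ver1 ver2
	read -ra ver1 <<< "$1"
	read -ra ver2 <<< "$3"
	local v lt=0 gt=0
	# Walk the longer component list, treating missing components as 0,
	# and record the first difference.
	for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
		((10#${ver1[v]:-0} > 10#${ver2[v]:-0})) && gt=1 && break
		((10#${ver1[v]:-0} < 10#${ver2[v]:-0})) && lt=1 && break
	done
	case "$op" in
		'<') ((lt == 1)) ;;
		'>') ((gt == 1)) ;;
	esac
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints the message, as in the trace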
16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:57.551 16:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:57.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:57.551 16:51:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.137 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:04.137 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:27:04.137 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:04.137 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:04.137 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:27:04.138 16:51:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:04.138 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:04.138 16:51:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:04.138 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:04.138 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:04.138 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:04.138 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:04.399 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.399 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.399 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:04.399 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.399 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:04.399 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:04.399 16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:04.399 
16:51:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:04.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:27:04.399 00:27:04.399 --- 10.0.0.2 ping statistics --- 00:27:04.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.399 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:04.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:27:04.399 00:27:04.399 --- 10.0.0.1 ping statistics --- 00:27:04.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.399 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=2819176 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 2819176 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2819176 ']' 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:04.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:04.399 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.659 [2024-10-01 16:51:56.125264] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:27:04.659 [2024-10-01 16:51:56.125352] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.659 [2024-10-01 16:51:56.188037] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.659 [2024-10-01 16:51:56.252580] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:04.659 [2024-10-01 16:51:56.252615] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:04.659 [2024-10-01 16:51:56.252621] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:04.659 [2024-10-01 16:51:56.252626] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:04.659 [2024-10-01 16:51:56.252631] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:04.659 [2024-10-01 16:51:56.252647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.659 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:04.659 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:27:04.659 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:04.659 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:04.659 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.919 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.919 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:04.919 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.919 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.919 [2024-10-01 16:51:56.392037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.919 [2024-10-01 16:51:56.400191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:04.919 null0 00:27:04.919 [2024-10-01 16:51:56.432196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.919 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.919 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2819264 00:27:04.919 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2819264 /tmp/host.sock 00:27:04.919 16:51:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:04.919 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2819264 ']' 00:27:04.919 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:27:04.919 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:04.919 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:04.919 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:04.919 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:04.919 16:51:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.919 [2024-10-01 16:51:56.517249] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:27:04.919 [2024-10-01 16:51:56.517294] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819264 ] 00:27:04.919 [2024-10-01 16:51:56.592095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.179 [2024-10-01 16:51:56.653696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.749 16:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:05.749 16:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:27:05.749 16:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:05.749 16:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:05.749 16:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.749 16:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.749 16:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.749 16:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:05.749 16:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.749 16:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.009 16:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.009 16:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:06.009 16:51:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.009 16:51:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.949 [2024-10-01 16:51:58.490401] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:06.949 [2024-10-01 16:51:58.490423] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:06.949 [2024-10-01 16:51:58.490436] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:06.949 [2024-10-01 16:51:58.617839] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:07.208 [2024-10-01 16:51:58.804306] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:07.208 [2024-10-01 16:51:58.804356] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:07.208 [2024-10-01 16:51:58.804377] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:07.208 [2024-10-01 16:51:58.804393] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:07.208 [2024-10-01 16:51:58.804412] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:07.209 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.209 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:07.209 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:07.209 [2024-10-01 16:51:58.810224] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1e792d0 was disconnected and freed. delete nvme_qpair. 
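wait_for_bdev, invoked just above and traced just below, polls the host-side RPC socket until bdev_get_bdevs reports exactly the expected bdev name. A hedged sketch of the helper pair (rpc_cmd in the trace wraps SPDK's rpc.py; this is illustrative, not the exact autotest code):

  get_bdev_list() {
      # same pipeline as the trace: names only, sorted, space-joined
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
  }
  wait_for_bdev nvme0n1   # discovery attached nvme0, so nvme0n1 should appear

Once nvme0n1 is listed, the test deletes 10.0.0.2/24 from cvl_0_0 inside the namespace and sets the link down, simulating the interface disappearing underneath an attached controller, then waits for the bdev list to drain to empty.
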
00:27:07.209 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.209 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:07.209 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.209 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:07.209 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.209 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:07.209 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.209 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:07.209 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:07.209 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:07.470 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:07.470 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:07.470 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.470 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:07.470 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.470 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:07.470 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.470 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:07.470 16:51:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.470 16:51:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:07.470 16:51:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:08.410 16:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:08.410 16:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.410 16:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:08.410 16:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.410 16:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:08.410 16:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.410 16:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:08.410 16:52:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.410 16:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:08.410 16:52:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:09.790 16:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:09.790 16:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:09.790 16:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:09.790 16:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.790 16:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:09.790 16:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:09.790 16:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:09.791 16:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.791 16:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:09.791 16:52:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:10.730 16:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:10.730 16:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.730 16:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:10.730 16:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.730 16:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:10.730 16:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:10.730 16:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:10.730 16:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.730 16:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:10.730 16:52:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:11.668 16:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:11.668 16:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:11.668 16:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:11.668 16:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:11.668 16:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.668 16:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:11.668 16:52:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:11.668 16:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.668 16:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:11.668 16:52:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:12.608 [2024-10-01 16:52:04.245316] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:12.608 [2024-10-01 16:52:04.245361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.608 [2024-10-01 16:52:04.245374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.608 [2024-10-01 16:52:04.245384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.608 [2024-10-01 16:52:04.245396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.608 [2024-10-01 16:52:04.245404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.608 [2024-10-01 16:52:04.245411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.608 [2024-10-01 16:52:04.245419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.608 [2024-10-01 16:52:04.245426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.608 [2024-10-01 16:52:04.245434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.608 [2024-10-01 16:52:04.245441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.608 [2024-10-01 16:52:04.245448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e55c40 is same with the state(6) to be set 00:27:12.608 16:52:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:12.608 [2024-10-01 16:52:04.255337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55c40 (9): Bad file descriptor 00:27:12.608 16:52:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.608 16:52:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:12.608 16:52:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.608 16:52:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:12.608 16:52:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:12.608 16:52:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:27:12.608 [2024-10-01 16:52:04.265376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:14.001 [2024-10-01 16:52:05.324088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:14.002 [2024-10-01 16:52:05.324178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e55c40 with addr=10.0.0.2, port=4420 00:27:14.002 [2024-10-01 16:52:05.324211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e55c40 is same with the state(6) to be set 00:27:14.002 [2024-10-01 16:52:05.324267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55c40 (9): Bad file descriptor 00:27:14.002 [2024-10-01 16:52:05.324406] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:14.002 [2024-10-01 16:52:05.324464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:14.002 [2024-10-01 16:52:05.324486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:14.002 [2024-10-01 16:52:05.324510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:14.002 [2024-10-01 16:52:05.324554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.002 [2024-10-01 16:52:05.324577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:14.002 16:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.002 16:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:14.002 16:52:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:15.048 [2024-10-01 16:52:06.326988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:15.048 [2024-10-01 16:52:06.327007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:15.048 [2024-10-01 16:52:06.327014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:15.048 [2024-10-01 16:52:06.327021] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:15.048 [2024-10-01 16:52:06.327032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
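errno 110 is ETIMEDOUT: with cvl_0_0 down, both the in-flight reads and the reconnect attempts time out. How long bdev_nvme keeps retrying before it deletes the controller (and with it nvme0n1, which is what wait_for_bdev '' is polling for) was fixed when discovery was started earlier in the trace. Reproduced verbatim from that invocation, with the knobs annotated:

  # --reconnect-delay-sec 1:      retry the TCP connect once per second
  # --fast-io-fail-timeout-sec 1: fail queued I/O after 1s without a connection
  # --ctrlr-loss-timeout-sec 2:   give up and delete the controller after 2s
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

(rpc_cmd in the trace is a wrapper around rpc.py; only the wrapper name differs here.)
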
00:27:15.048 [2024-10-01 16:52:06.327051] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:15.048 [2024-10-01 16:52:06.327072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.048 [2024-10-01 16:52:06.327081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.048 [2024-10-01 16:52:06.327091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.048 [2024-10-01 16:52:06.327098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.048 [2024-10-01 16:52:06.327106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.048 [2024-10-01 16:52:06.327113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.048 [2024-10-01 16:52:06.327121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.048 [2024-10-01 16:52:06.327127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.048 [2024-10-01 16:52:06.327135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.048 [2024-10-01 16:52:06.327142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.048 [2024-10-01 16:52:06.327150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:27:15.048 [2024-10-01 16:52:06.327579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e45380 (9): Bad file descriptor 00:27:15.048 [2024-10-01 16:52:06.328591] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:15.048 [2024-10-01 16:52:06.328602] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:15.048 16:52:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:15.983 16:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:15.983 16:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:15.983 16:52:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:15.983 16:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.983 16:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:15.983 16:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:15.983 16:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:15.983 16:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.983 16:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:15.983 16:52:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:16.917 [2024-10-01 16:52:08.385840] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:16.917 [2024-10-01 16:52:08.385855] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:16.917 [2024-10-01 16:52:08.385867] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:16.917 [2024-10-01 16:52:08.515283] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:16.917 16:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:17.178 16:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.178 16:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:17.178 16:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.178 16:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:17.178 16:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:17.178 16:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:17.178 16:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.178 16:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:17.178 16:52:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:17.178 [2024-10-01 16:52:08.737442] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:17.178 [2024-10-01 16:52:08.737483] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:17.178 [2024-10-01 16:52:08.737503] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:17.178 [2024-10-01 16:52:08.737516] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:17.178 [2024-10-01 16:52:08.737523] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:17.178 [2024-10-01 16:52:08.784638] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1e60140 was disconnected and freed. 
delete nvme_qpair. 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2819264 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2819264 ']' 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2819264 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2819264 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2819264' 00:27:18.117 killing process with pid 2819264 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2819264 00:27:18.117 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2819264 00:27:18.377 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:18.377 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:18.377 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:18.377 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:18.377 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:18.377 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:18.377 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:18.377 rmmod nvme_tcp 00:27:18.377 rmmod nvme_fabrics 00:27:18.377 rmmod nvme_keyring 
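Teardown kills the host application first (pid 2819264); with it gone, nvmfcleanup unloads the kernel modules, and the rmmod lines above are modprobe -v -r running verbosely. A condensed sketch of the killprocess pattern visible in the trace (illustrative; the real helper also special-cases processes running under sudo, which is what the ps comm= check above is for):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0    # nothing to do if it is already gone
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true           # reap the child; a signal exit is expected here
  }
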
00:27:18.377 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:18.377 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:18.377 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:18.377 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 2819176 ']' 00:27:18.377 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 2819176 00:27:18.377 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2819176 ']' 00:27:18.378 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2819176 00:27:18.378 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:27:18.378 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:18.378 16:52:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2819176 00:27:18.378 16:52:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:18.378 16:52:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:18.378 16:52:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2819176' 00:27:18.378 killing process with pid 2819176 00:27:18.378 16:52:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2819176 00:27:18.378 16:52:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2819176 00:27:18.638 16:52:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:18.638 16:52:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:18.638 16:52:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:18.638 16:52:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:18.638 16:52:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:27:18.638 16:52:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:18.638 16:52:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:27:18.638 16:52:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:18.638 16:52:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:18.638 16:52:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.638 16:52:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.638 16:52:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.547 16:52:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:20.547 00:27:20.547 real 0m23.414s 00:27:20.547 user 0m28.820s 00:27:20.547 sys 0m6.652s 00:27:20.547 16:52:12 
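The firewall cleanup below is the inverse of the setup: every rule the test inserted carries an 'SPDK_NVMF:' comment (visible when the ACCEPT rule was added), so iptr removes exactly those rules in one save/filter/restore round trip. Commands as traced, plus an assumed expansion of _remove_spdk_ns:

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only this test's rules
  ip netns delete cvl_0_0_ns_spdk                        # assumption: what _remove_spdk_ns amounts to
  ip -4 addr flush cvl_0_1

With networking restored, the test reports its totals (23.4s wall clock) and the suite moves on to nvmf_identify_kernel_target.
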
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:20.547 16:52:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:20.547 ************************************ 00:27:20.547 END TEST nvmf_discovery_remove_ifc 00:27:20.547 ************************************ 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.807 ************************************ 00:27:20.807 START TEST nvmf_identify_kernel_target 00:27:20.807 ************************************ 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:20.807 * Looking for test storage... 00:27:20.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:20.807 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:20.808 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:20.808 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:20.808 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:20.808 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:20.808 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:20.808 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:21.068 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:21.068 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:21.068 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:21.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.069 --rc genhtml_branch_coverage=1 00:27:21.069 --rc genhtml_function_coverage=1 00:27:21.069 --rc genhtml_legend=1 00:27:21.069 --rc geninfo_all_blocks=1 00:27:21.069 --rc geninfo_unexecuted_blocks=1 00:27:21.069 00:27:21.069 ' 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:21.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.069 --rc genhtml_branch_coverage=1 00:27:21.069 --rc genhtml_function_coverage=1 00:27:21.069 --rc genhtml_legend=1 00:27:21.069 --rc geninfo_all_blocks=1 00:27:21.069 --rc geninfo_unexecuted_blocks=1 00:27:21.069 00:27:21.069 ' 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:21.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.069 --rc genhtml_branch_coverage=1 00:27:21.069 --rc genhtml_function_coverage=1 00:27:21.069 --rc genhtml_legend=1 00:27:21.069 --rc geninfo_all_blocks=1 00:27:21.069 --rc geninfo_unexecuted_blocks=1 00:27:21.069 00:27:21.069 ' 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:21.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.069 --rc genhtml_branch_coverage=1 00:27:21.069 --rc genhtml_function_coverage=1 00:27:21.069 --rc genhtml_legend=1 00:27:21.069 --rc geninfo_all_blocks=1 00:27:21.069 --rc geninfo_unexecuted_blocks=1 00:27:21.069 00:27:21.069 ' 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
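The cmp_versions trace above is scripts/common.sh deciding that lcov 1.15 sorts before 2, which selects the --rc lcov_branch_coverage=1 style options just exported. It splits each version string on dots and compares the fields numerically; a compact equivalent using GNU sort's version ordering (an alternative formulation, not the script's own loop):

  lt() {
      [ "$1" = "$2" ] && return 1
      [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  lt 1.15 2 && echo "old lcov: use --rc lcov_branch_coverage=1"
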
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:27:21.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:21.069 16:52:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:29.204 16:52:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:29.204 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:29.204 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:29.204 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:29.205 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:29.205 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:29.205 16:52:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:29.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:29.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:27:29.205 00:27:29.205 --- 10.0.0.2 ping statistics --- 00:27:29.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.205 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:29.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:29.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:27:29.205 00:27:29.205 --- 10.0.0.1 ping statistics --- 00:27:29.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.205 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.205 16:52:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:29.205 16:52:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:31.769 Waiting for block devices as requested 00:27:32.029 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:32.029 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:32.029 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:32.288 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:32.288 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:32.288 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:32.547 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:32.547 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:32.547 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:27:32.807 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:32.807 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:32.807 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:33.068 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:33.068 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:33.068 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:33.328 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:33.328 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:33.589 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:27:33.589 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:33.589 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:27:33.589 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:33.589 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:33.589 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
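The configure_kernel_target call the trace has just entered builds a kernel NVMe-oF target out of the local NVMe disk purely through the nvmet configfs tree, and the trace that follows walks those steps one write at a time. xtrace does not show redirection targets, so as a reading aid, here is the same sequence written out in full. This is a sketch: the attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the stock kernel nvmet configfs interface, inferred from the echoed values, not printed in the log itself.

# Expose local /dev/nvme0n1 as nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420/tcp
modprobe nvmet
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"            # accept any host NQN
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"      # listen address
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"      # link = go live on the port

The final ln -s is what activates the subsystem: nvmet starts accepting connections for an NQN only once its subsystem directory is linked under a port, which is why the nvme discover that follows immediately sees two records (the discovery subsystem plus testnqn).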
00:27:33.589 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:27:33.589 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:33.589 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:33.589 No valid GPT data, bailing 00:27:33.589 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:33.589 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:33.589 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:33.589 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:27:33.589 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:27:33.589 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:33.589 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:33.850 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:33.850 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:33.850 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:27:33.850 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:27:33.850 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:33.850 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:27:33.850 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:27:33.850 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:27:33.850 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:27:33.850 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:33.850 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.1 -t tcp -s 4420 00:27:33.850 00:27:33.850 Discovery Log Number of Records 2, Generation counter 2 00:27:33.850 =====Discovery Log Entry 0====== 00:27:33.850 trtype: tcp 00:27:33.850 adrfam: ipv4 00:27:33.850 subtype: current discovery subsystem 00:27:33.850 treq: not specified, sq flow control disable supported 00:27:33.850 portid: 1 00:27:33.850 trsvcid: 4420 00:27:33.850 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:33.850 traddr: 10.0.0.1 00:27:33.850 eflags: none 00:27:33.850 sectype: none 00:27:33.850 =====Discovery Log Entry 1====== 00:27:33.850 trtype: tcp 00:27:33.850 adrfam: ipv4 00:27:33.850 subtype: nvme subsystem 00:27:33.850 treq: not specified, sq flow control disable 
supported 00:27:33.850 portid: 1 00:27:33.850 trsvcid: 4420 00:27:33.850 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:33.850 traddr: 10.0.0.1 00:27:33.850 eflags: none 00:27:33.850 sectype: none 00:27:33.850 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:33.850 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:33.850 ===================================================== 00:27:33.850 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:33.850 ===================================================== 00:27:33.850 Controller Capabilities/Features 00:27:33.850 ================================ 00:27:33.850 Vendor ID: 0000 00:27:33.850 Subsystem Vendor ID: 0000 00:27:33.850 Serial Number: 124498c621da2794c834 00:27:33.850 Model Number: Linux 00:27:33.850 Firmware Version: 6.8.9-20 00:27:33.850 Recommended Arb Burst: 0 00:27:33.850 IEEE OUI Identifier: 00 00 00 00:27:33.850 Multi-path I/O 00:27:33.850 May have multiple subsystem ports: No 00:27:33.850 May have multiple controllers: No 00:27:33.850 Associated with SR-IOV VF: No 00:27:33.850 Max Data Transfer Size: Unlimited 00:27:33.850 Max Number of Namespaces: 0 00:27:33.850 Max Number of I/O Queues: 1024 00:27:33.850 NVMe Specification Version (VS): 1.3 00:27:33.850 NVMe Specification Version (Identify): 1.3 00:27:33.850 Maximum Queue Entries: 1024 00:27:33.850 Contiguous Queues Required: No 00:27:33.850 Arbitration Mechanisms Supported 00:27:33.850 Weighted Round Robin: Not Supported 00:27:33.850 Vendor Specific: Not Supported 00:27:33.850 Reset Timeout: 7500 ms 00:27:33.850 Doorbell Stride: 4 bytes 00:27:33.850 NVM Subsystem Reset: Not Supported 00:27:33.850 Command Sets Supported 00:27:33.850 NVM Command Set: Supported 00:27:33.850 Boot Partition: Not Supported 00:27:33.850 Memory Page Size Minimum: 4096 bytes 00:27:33.850 Memory Page Size Maximum: 4096 bytes 00:27:33.850 Persistent Memory Region: Not Supported 00:27:33.850 Optional Asynchronous Events Supported 00:27:33.850 Namespace Attribute Notices: Not Supported 00:27:33.850 Firmware Activation Notices: Not Supported 00:27:33.850 ANA Change Notices: Not Supported 00:27:33.850 PLE Aggregate Log Change Notices: Not Supported 00:27:33.850 LBA Status Info Alert Notices: Not Supported 00:27:33.850 EGE Aggregate Log Change Notices: Not Supported 00:27:33.850 Normal NVM Subsystem Shutdown event: Not Supported 00:27:33.850 Zone Descriptor Change Notices: Not Supported 00:27:33.850 Discovery Log Change Notices: Supported 00:27:33.850 Controller Attributes 00:27:33.850 128-bit Host Identifier: Not Supported 00:27:33.850 Non-Operational Permissive Mode: Not Supported 00:27:33.850 NVM Sets: Not Supported 00:27:33.850 Read Recovery Levels: Not Supported 00:27:33.850 Endurance Groups: Not Supported 00:27:33.850 Predictable Latency Mode: Not Supported 00:27:33.850 Traffic Based Keep ALive: Not Supported 00:27:33.851 Namespace Granularity: Not Supported 00:27:33.851 SQ Associations: Not Supported 00:27:33.851 UUID List: Not Supported 00:27:33.851 Multi-Domain Subsystem: Not Supported 00:27:33.851 Fixed Capacity Management: Not Supported 00:27:33.851 Variable Capacity Management: Not Supported 00:27:33.851 Delete Endurance Group: Not Supported 00:27:33.851 Delete NVM Set: Not Supported 00:27:33.851 Extended LBA Formats Supported: Not Supported 00:27:33.851 Flexible Data Placement 
Supported: Not Supported 00:27:33.851 00:27:33.851 Controller Memory Buffer Support 00:27:33.851 ================================ 00:27:33.851 Supported: No 00:27:33.851 00:27:33.851 Persistent Memory Region Support 00:27:33.851 ================================ 00:27:33.851 Supported: No 00:27:33.851 00:27:33.851 Admin Command Set Attributes 00:27:33.851 ============================ 00:27:33.851 Security Send/Receive: Not Supported 00:27:33.851 Format NVM: Not Supported 00:27:33.851 Firmware Activate/Download: Not Supported 00:27:33.851 Namespace Management: Not Supported 00:27:33.851 Device Self-Test: Not Supported 00:27:33.851 Directives: Not Supported 00:27:33.851 NVMe-MI: Not Supported 00:27:33.851 Virtualization Management: Not Supported 00:27:33.851 Doorbell Buffer Config: Not Supported 00:27:33.851 Get LBA Status Capability: Not Supported 00:27:33.851 Command & Feature Lockdown Capability: Not Supported 00:27:33.851 Abort Command Limit: 1 00:27:33.851 Async Event Request Limit: 1 00:27:33.851 Number of Firmware Slots: N/A 00:27:33.851 Firmware Slot 1 Read-Only: N/A 00:27:33.851 Firmware Activation Without Reset: N/A 00:27:33.851 Multiple Update Detection Support: N/A 00:27:33.851 Firmware Update Granularity: No Information Provided 00:27:33.851 Per-Namespace SMART Log: No 00:27:33.851 Asymmetric Namespace Access Log Page: Not Supported 00:27:33.851 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:33.851 Command Effects Log Page: Not Supported 00:27:33.851 Get Log Page Extended Data: Supported 00:27:33.851 Telemetry Log Pages: Not Supported 00:27:33.851 Persistent Event Log Pages: Not Supported 00:27:33.851 Supported Log Pages Log Page: May Support 00:27:33.851 Commands Supported & Effects Log Page: Not Supported 00:27:33.851 Feature Identifiers & Effects Log Page:May Support 00:27:33.851 NVMe-MI Commands & Effects Log Page: May Support 00:27:33.851 Data Area 4 for Telemetry Log: Not Supported 00:27:33.851 Error Log Page Entries Supported: 1 00:27:33.851 Keep Alive: Not Supported 00:27:33.851 00:27:33.851 NVM Command Set Attributes 00:27:33.851 ========================== 00:27:33.851 Submission Queue Entry Size 00:27:33.851 Max: 1 00:27:33.851 Min: 1 00:27:33.851 Completion Queue Entry Size 00:27:33.851 Max: 1 00:27:33.851 Min: 1 00:27:33.851 Number of Namespaces: 0 00:27:33.851 Compare Command: Not Supported 00:27:33.851 Write Uncorrectable Command: Not Supported 00:27:33.851 Dataset Management Command: Not Supported 00:27:33.851 Write Zeroes Command: Not Supported 00:27:33.851 Set Features Save Field: Not Supported 00:27:33.851 Reservations: Not Supported 00:27:33.851 Timestamp: Not Supported 00:27:33.851 Copy: Not Supported 00:27:33.851 Volatile Write Cache: Not Present 00:27:33.851 Atomic Write Unit (Normal): 1 00:27:33.851 Atomic Write Unit (PFail): 1 00:27:33.851 Atomic Compare & Write Unit: 1 00:27:33.851 Fused Compare & Write: Not Supported 00:27:33.851 Scatter-Gather List 00:27:33.851 SGL Command Set: Supported 00:27:33.851 SGL Keyed: Not Supported 00:27:33.851 SGL Bit Bucket Descriptor: Not Supported 00:27:33.851 SGL Metadata Pointer: Not Supported 00:27:33.851 Oversized SGL: Not Supported 00:27:33.851 SGL Metadata Address: Not Supported 00:27:33.851 SGL Offset: Supported 00:27:33.851 Transport SGL Data Block: Not Supported 00:27:33.851 Replay Protected Memory Block: Not Supported 00:27:33.851 00:27:33.851 Firmware Slot Information 00:27:33.851 ========================= 00:27:33.851 Active slot: 0 00:27:33.851 00:27:33.851 00:27:33.851 Error Log 00:27:33.851 
========= 00:27:33.851 00:27:33.851 Active Namespaces 00:27:33.851 ================= 00:27:33.851 Discovery Log Page 00:27:33.851 ================== 00:27:33.851 Generation Counter: 2 00:27:33.851 Number of Records: 2 00:27:33.851 Record Format: 0 00:27:33.851 00:27:33.851 Discovery Log Entry 0 00:27:33.851 ---------------------- 00:27:33.851 Transport Type: 3 (TCP) 00:27:33.851 Address Family: 1 (IPv4) 00:27:33.851 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:33.851 Entry Flags: 00:27:33.851 Duplicate Returned Information: 0 00:27:33.851 Explicit Persistent Connection Support for Discovery: 0 00:27:33.851 Transport Requirements: 00:27:33.851 Secure Channel: Not Specified 00:27:33.851 Port ID: 1 (0x0001) 00:27:33.851 Controller ID: 65535 (0xffff) 00:27:33.851 Admin Max SQ Size: 32 00:27:33.851 Transport Service Identifier: 4420 00:27:33.851 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:33.851 Transport Address: 10.0.0.1 00:27:33.851 Discovery Log Entry 1 00:27:33.851 ---------------------- 00:27:33.851 Transport Type: 3 (TCP) 00:27:33.851 Address Family: 1 (IPv4) 00:27:33.851 Subsystem Type: 2 (NVM Subsystem) 00:27:33.851 Entry Flags: 00:27:33.851 Duplicate Returned Information: 0 00:27:33.851 Explicit Persistent Connection Support for Discovery: 0 00:27:33.851 Transport Requirements: 00:27:33.851 Secure Channel: Not Specified 00:27:33.851 Port ID: 1 (0x0001) 00:27:33.851 Controller ID: 65535 (0xffff) 00:27:33.851 Admin Max SQ Size: 32 00:27:33.851 Transport Service Identifier: 4420 00:27:33.851 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:33.851 Transport Address: 10.0.0.1 00:27:33.851 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:34.112 get_feature(0x01) failed 00:27:34.112 get_feature(0x02) failed 00:27:34.112 get_feature(0x04) failed 00:27:34.112 ===================================================== 00:27:34.112 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:34.112 ===================================================== 00:27:34.112 Controller Capabilities/Features 00:27:34.112 ================================ 00:27:34.112 Vendor ID: 0000 00:27:34.112 Subsystem Vendor ID: 0000 00:27:34.112 Serial Number: cfbab1729afbca8ea5c1 00:27:34.112 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:34.112 Firmware Version: 6.8.9-20 00:27:34.112 Recommended Arb Burst: 6 00:27:34.112 IEEE OUI Identifier: 00 00 00 00:27:34.112 Multi-path I/O 00:27:34.112 May have multiple subsystem ports: Yes 00:27:34.112 May have multiple controllers: Yes 00:27:34.112 Associated with SR-IOV VF: No 00:27:34.112 Max Data Transfer Size: Unlimited 00:27:34.112 Max Number of Namespaces: 1024 00:27:34.112 Max Number of I/O Queues: 128 00:27:34.112 NVMe Specification Version (VS): 1.3 00:27:34.112 NVMe Specification Version (Identify): 1.3 00:27:34.112 Maximum Queue Entries: 1024 00:27:34.112 Contiguous Queues Required: No 00:27:34.112 Arbitration Mechanisms Supported 00:27:34.112 Weighted Round Robin: Not Supported 00:27:34.112 Vendor Specific: Not Supported 00:27:34.112 Reset Timeout: 7500 ms 00:27:34.112 Doorbell Stride: 4 bytes 00:27:34.112 NVM Subsystem Reset: Not Supported 00:27:34.112 Command Sets Supported 00:27:34.112 NVM Command Set: Supported 00:27:34.112 Boot Partition: Not Supported 00:27:34.112 
Memory Page Size Minimum: 4096 bytes 00:27:34.112 Memory Page Size Maximum: 4096 bytes 00:27:34.112 Persistent Memory Region: Not Supported 00:27:34.112 Optional Asynchronous Events Supported 00:27:34.112 Namespace Attribute Notices: Supported 00:27:34.112 Firmware Activation Notices: Not Supported 00:27:34.112 ANA Change Notices: Supported 00:27:34.112 PLE Aggregate Log Change Notices: Not Supported 00:27:34.112 LBA Status Info Alert Notices: Not Supported 00:27:34.112 EGE Aggregate Log Change Notices: Not Supported 00:27:34.112 Normal NVM Subsystem Shutdown event: Not Supported 00:27:34.112 Zone Descriptor Change Notices: Not Supported 00:27:34.112 Discovery Log Change Notices: Not Supported 00:27:34.112 Controller Attributes 00:27:34.112 128-bit Host Identifier: Supported 00:27:34.112 Non-Operational Permissive Mode: Not Supported 00:27:34.112 NVM Sets: Not Supported 00:27:34.112 Read Recovery Levels: Not Supported 00:27:34.112 Endurance Groups: Not Supported 00:27:34.112 Predictable Latency Mode: Not Supported 00:27:34.112 Traffic Based Keep ALive: Supported 00:27:34.112 Namespace Granularity: Not Supported 00:27:34.112 SQ Associations: Not Supported 00:27:34.112 UUID List: Not Supported 00:27:34.113 Multi-Domain Subsystem: Not Supported 00:27:34.113 Fixed Capacity Management: Not Supported 00:27:34.113 Variable Capacity Management: Not Supported 00:27:34.113 Delete Endurance Group: Not Supported 00:27:34.113 Delete NVM Set: Not Supported 00:27:34.113 Extended LBA Formats Supported: Not Supported 00:27:34.113 Flexible Data Placement Supported: Not Supported 00:27:34.113 00:27:34.113 Controller Memory Buffer Support 00:27:34.113 ================================ 00:27:34.113 Supported: No 00:27:34.113 00:27:34.113 Persistent Memory Region Support 00:27:34.113 ================================ 00:27:34.113 Supported: No 00:27:34.113 00:27:34.113 Admin Command Set Attributes 00:27:34.113 ============================ 00:27:34.113 Security Send/Receive: Not Supported 00:27:34.113 Format NVM: Not Supported 00:27:34.113 Firmware Activate/Download: Not Supported 00:27:34.113 Namespace Management: Not Supported 00:27:34.113 Device Self-Test: Not Supported 00:27:34.113 Directives: Not Supported 00:27:34.113 NVMe-MI: Not Supported 00:27:34.113 Virtualization Management: Not Supported 00:27:34.113 Doorbell Buffer Config: Not Supported 00:27:34.113 Get LBA Status Capability: Not Supported 00:27:34.113 Command & Feature Lockdown Capability: Not Supported 00:27:34.113 Abort Command Limit: 4 00:27:34.113 Async Event Request Limit: 4 00:27:34.113 Number of Firmware Slots: N/A 00:27:34.113 Firmware Slot 1 Read-Only: N/A 00:27:34.113 Firmware Activation Without Reset: N/A 00:27:34.113 Multiple Update Detection Support: N/A 00:27:34.113 Firmware Update Granularity: No Information Provided 00:27:34.113 Per-Namespace SMART Log: Yes 00:27:34.113 Asymmetric Namespace Access Log Page: Supported 00:27:34.113 ANA Transition Time : 10 sec 00:27:34.113 00:27:34.113 Asymmetric Namespace Access Capabilities 00:27:34.113 ANA Optimized State : Supported 00:27:34.113 ANA Non-Optimized State : Supported 00:27:34.113 ANA Inaccessible State : Supported 00:27:34.113 ANA Persistent Loss State : Supported 00:27:34.113 ANA Change State : Supported 00:27:34.113 ANAGRPID is not changed : No 00:27:34.113 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:34.113 00:27:34.113 ANA Group Identifier Maximum : 128 00:27:34.113 Number of ANA Group Identifiers : 128 00:27:34.113 Max Number of Allowed Namespaces : 1024 00:27:34.113 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:34.113 Command Effects Log Page: Supported 00:27:34.113 Get Log Page Extended Data: Supported 00:27:34.113 Telemetry Log Pages: Not Supported 00:27:34.113 Persistent Event Log Pages: Not Supported 00:27:34.113 Supported Log Pages Log Page: May Support 00:27:34.113 Commands Supported & Effects Log Page: Not Supported 00:27:34.113 Feature Identifiers & Effects Log Page:May Support 00:27:34.113 NVMe-MI Commands & Effects Log Page: May Support 00:27:34.113 Data Area 4 for Telemetry Log: Not Supported 00:27:34.113 Error Log Page Entries Supported: 128 00:27:34.113 Keep Alive: Supported 00:27:34.113 Keep Alive Granularity: 1000 ms 00:27:34.113 00:27:34.113 NVM Command Set Attributes 00:27:34.113 ========================== 00:27:34.113 Submission Queue Entry Size 00:27:34.113 Max: 64 00:27:34.113 Min: 64 00:27:34.113 Completion Queue Entry Size 00:27:34.113 Max: 16 00:27:34.113 Min: 16 00:27:34.113 Number of Namespaces: 1024 00:27:34.113 Compare Command: Not Supported 00:27:34.113 Write Uncorrectable Command: Not Supported 00:27:34.113 Dataset Management Command: Supported 00:27:34.113 Write Zeroes Command: Supported 00:27:34.113 Set Features Save Field: Not Supported 00:27:34.113 Reservations: Not Supported 00:27:34.113 Timestamp: Not Supported 00:27:34.113 Copy: Not Supported 00:27:34.113 Volatile Write Cache: Present 00:27:34.113 Atomic Write Unit (Normal): 1 00:27:34.113 Atomic Write Unit (PFail): 1 00:27:34.113 Atomic Compare & Write Unit: 1 00:27:34.113 Fused Compare & Write: Not Supported 00:27:34.113 Scatter-Gather List 00:27:34.113 SGL Command Set: Supported 00:27:34.113 SGL Keyed: Not Supported 00:27:34.113 SGL Bit Bucket Descriptor: Not Supported 00:27:34.113 SGL Metadata Pointer: Not Supported 00:27:34.113 Oversized SGL: Not Supported 00:27:34.113 SGL Metadata Address: Not Supported 00:27:34.113 SGL Offset: Supported 00:27:34.113 Transport SGL Data Block: Not Supported 00:27:34.113 Replay Protected Memory Block: Not Supported 00:27:34.113 00:27:34.113 Firmware Slot Information 00:27:34.113 ========================= 00:27:34.113 Active slot: 0 00:27:34.113 00:27:34.113 Asymmetric Namespace Access 00:27:34.113 =========================== 00:27:34.113 Change Count : 0 00:27:34.113 Number of ANA Group Descriptors : 1 00:27:34.113 ANA Group Descriptor : 0 00:27:34.113 ANA Group ID : 1 00:27:34.113 Number of NSID Values : 1 00:27:34.113 Change Count : 0 00:27:34.113 ANA State : 1 00:27:34.113 Namespace Identifier : 1 00:27:34.113 00:27:34.113 Commands Supported and Effects 00:27:34.113 ============================== 00:27:34.113 Admin Commands 00:27:34.113 -------------- 00:27:34.113 Get Log Page (02h): Supported 00:27:34.113 Identify (06h): Supported 00:27:34.113 Abort (08h): Supported 00:27:34.113 Set Features (09h): Supported 00:27:34.113 Get Features (0Ah): Supported 00:27:34.113 Asynchronous Event Request (0Ch): Supported 00:27:34.113 Keep Alive (18h): Supported 00:27:34.113 I/O Commands 00:27:34.113 ------------ 00:27:34.113 Flush (00h): Supported 00:27:34.113 Write (01h): Supported LBA-Change 00:27:34.113 Read (02h): Supported 00:27:34.113 Write Zeroes (08h): Supported LBA-Change 00:27:34.113 Dataset Management (09h): Supported 00:27:34.113 00:27:34.113 Error Log 00:27:34.113 ========= 00:27:34.113 Entry: 0 00:27:34.113 Error Count: 0x3 00:27:34.113 Submission Queue Id: 0x0 00:27:34.113 Command Id: 0x5 00:27:34.113 Phase Bit: 0 00:27:34.113 Status Code: 0x2 00:27:34.113 Status Code Type: 0x0 00:27:34.113 Do Not Retry: 1 00:27:34.113 
Error Location: 0x28 00:27:34.113 LBA: 0x0 00:27:34.113 Namespace: 0x0 00:27:34.113 Vendor Log Page: 0x0 00:27:34.113 ----------- 00:27:34.113 Entry: 1 00:27:34.113 Error Count: 0x2 00:27:34.113 Submission Queue Id: 0x0 00:27:34.113 Command Id: 0x5 00:27:34.113 Phase Bit: 0 00:27:34.113 Status Code: 0x2 00:27:34.113 Status Code Type: 0x0 00:27:34.113 Do Not Retry: 1 00:27:34.113 Error Location: 0x28 00:27:34.113 LBA: 0x0 00:27:34.113 Namespace: 0x0 00:27:34.113 Vendor Log Page: 0x0 00:27:34.113 ----------- 00:27:34.113 Entry: 2 00:27:34.113 Error Count: 0x1 00:27:34.113 Submission Queue Id: 0x0 00:27:34.113 Command Id: 0x4 00:27:34.113 Phase Bit: 0 00:27:34.113 Status Code: 0x2 00:27:34.113 Status Code Type: 0x0 00:27:34.113 Do Not Retry: 1 00:27:34.113 Error Location: 0x28 00:27:34.113 LBA: 0x0 00:27:34.113 Namespace: 0x0 00:27:34.113 Vendor Log Page: 0x0 00:27:34.113 00:27:34.113 Number of Queues 00:27:34.113 ================ 00:27:34.113 Number of I/O Submission Queues: 128 00:27:34.113 Number of I/O Completion Queues: 128 00:27:34.113 00:27:34.113 ZNS Specific Controller Data 00:27:34.113 ============================ 00:27:34.113 Zone Append Size Limit: 0 00:27:34.113 00:27:34.113 00:27:34.113 Active Namespaces 00:27:34.113 ================= 00:27:34.113 get_feature(0x05) failed 00:27:34.113 Namespace ID:1 00:27:34.113 Command Set Identifier: NVM (00h) 00:27:34.113 Deallocate: Supported 00:27:34.113 Deallocated/Unwritten Error: Not Supported 00:27:34.113 Deallocated Read Value: Unknown 00:27:34.113 Deallocate in Write Zeroes: Not Supported 00:27:34.113 Deallocated Guard Field: 0xFFFF 00:27:34.113 Flush: Supported 00:27:34.113 Reservation: Not Supported 00:27:34.113 Namespace Sharing Capabilities: Multiple Controllers 00:27:34.113 Size (in LBAs): 3907029168 (1863GiB) 00:27:34.113 Capacity (in LBAs): 3907029168 (1863GiB) 00:27:34.113 Utilization (in LBAs): 3907029168 (1863GiB) 00:27:34.113 UUID: c1f020ff-5b48-4299-99f9-e3fa98f9c037 00:27:34.113 Thin Provisioning: Not Supported 00:27:34.113 Per-NS Atomic Units: Yes 00:27:34.113 Atomic Boundary Size (Normal): 0 00:27:34.113 Atomic Boundary Size (PFail): 0 00:27:34.113 Atomic Boundary Offset: 0 00:27:34.113 NGUID/EUI64 Never Reused: No 00:27:34.113 ANA group ID: 1 00:27:34.113 Namespace Write Protected: No 00:27:34.113 Number of LBA Formats: 1 00:27:34.113 Current LBA Format: LBA Format #00 00:27:34.113 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:34.113 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:34.113 rmmod nvme_tcp 00:27:34.113 rmmod nvme_fabrics 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:34.113 16:52:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:34.113 16:52:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.026 16:52:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:36.287 16:52:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:36.287 16:52:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:36.287 16:52:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:27:36.287 16:52:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:36.287 16:52:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:36.287 16:52:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:36.287 16:52:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:36.287 16:52:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:27:36.287 16:52:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:27:36.287 16:52:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:39.587 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:39.587 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:39.587 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:39.587 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:39.587 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:39.587 0000:80:01.3 
(8086 0b00): ioatdma -> vfio-pci 00:27:39.587 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:39.587 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:39.587 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:39.587 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:39.587 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:39.587 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:39.587 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:39.587 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:39.587 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:39.587 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:41.496 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:27:41.756 00:27:41.756 real 0m20.967s 00:27:41.756 user 0m4.991s 00:27:41.756 sys 0m11.087s 00:27:41.757 16:52:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:41.757 16:52:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:41.757 ************************************ 00:27:41.757 END TEST nvmf_identify_kernel_target 00:27:41.757 ************************************ 00:27:41.757 16:52:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:41.757 16:52:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:41.757 16:52:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:41.757 16:52:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.757 ************************************ 00:27:41.757 START TEST nvmf_auth_host 00:27:41.757 ************************************ 00:27:41.757 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:42.016 * Looking for test storage... 
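Before the nvmf_auth_host trace continues: the clean_kernel_target teardown above is the exact mirror image of the configfs setup, unwinding the tree leaf-first before the modules can be unloaded. Written out with the redirect targets that xtrace hid (again a sketch; the enable attribute name is the standard nvmet file, assumed from context):

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"            # quiesce the namespace first
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"   # unlink from port
rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
modprobe -r nvmet_tcp nvmet                       # safe once nothing holds nvmet

The ordering matters: the port link must be removed and the directories emptied before rmdir succeeds, and the trace's check of /sys/module/nvmet/holders/* just before modprobe -r confirms no other module still pins nvmet.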
00:27:42.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:42.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.016 --rc genhtml_branch_coverage=1 00:27:42.016 --rc genhtml_function_coverage=1 00:27:42.016 --rc genhtml_legend=1 00:27:42.016 --rc geninfo_all_blocks=1 00:27:42.016 --rc geninfo_unexecuted_blocks=1 00:27:42.016 00:27:42.016 ' 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:42.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.016 --rc genhtml_branch_coverage=1 00:27:42.016 --rc genhtml_function_coverage=1 00:27:42.016 --rc genhtml_legend=1 00:27:42.016 --rc geninfo_all_blocks=1 00:27:42.016 --rc geninfo_unexecuted_blocks=1 00:27:42.016 00:27:42.016 ' 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:42.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.016 --rc genhtml_branch_coverage=1 00:27:42.016 --rc genhtml_function_coverage=1 00:27:42.016 --rc genhtml_legend=1 00:27:42.016 --rc geninfo_all_blocks=1 00:27:42.016 --rc geninfo_unexecuted_blocks=1 00:27:42.016 00:27:42.016 ' 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:42.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.016 --rc genhtml_branch_coverage=1 00:27:42.016 --rc genhtml_function_coverage=1 00:27:42.016 --rc genhtml_legend=1 00:27:42.016 --rc geninfo_all_blocks=1 00:27:42.016 --rc geninfo_unexecuted_blocks=1 00:27:42.016 00:27:42.016 ' 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:42.016 16:52:33 
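The cmp_versions trace above, used here to decide whether the installed lcov predates 2.x and therefore wants the lcov 1.x spelling of the branch-coverage options, is a plain component-wise compare. A self-contained sketch, assuming purely numeric version fields:

# Component-wise version compare, after scripts/common.sh lt()/cmp_versions.
lt() {                         # lt 1.15 2 -> true when $1 < $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # missing fields count as 0
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
    done
    return 1                   # equal versions are not less-than
}
lt "$(lcov --version | awk '{print $NF}')" 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'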
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:42.016 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:42.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:42.017 16:52:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.593 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:48.593 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:48.594 16:52:40 
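The "[: : integer expression expected" complaint from nvmf/common.sh line 33, a few entries back, is a benign but real bug: an unset flag reaches a numeric test as the empty string, i.e. '[' '' -eq 1 ']'. A defensive rewrite expands the variable with a default; the flag name below is a stand-in, since the trace does not show which variable line 33 actually tests:

# SOME_SPDK_FLAG is hypothetical; substitute the flag tested at common.sh line 33.
if [[ ${SOME_SPDK_FLAG:-0} -eq 1 ]]; then
    NVMF_APP+=(--some-extra-arg)   # placeholder for whatever the flag would enable
fi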
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:48.594 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:48.594 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.594 
16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:48.594 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:48.594 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:48.594 16:52:40 
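The discovery loop above pivots from PCI functions to kernel interface names through sysfs, which is how the two e810 ports end up as cvl_0_0 and cvl_0_1. A condensed sketch of the idiom, hard-coding the addresses this run found:

net_devs=()
for pci in 0000:4b:00.0 0000:4b:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # netdev dirs for this function
    [[ -e ${pci_net_devs[0]} ]] || continue              # skip if bound to vfio-pci etc.
    pci_net_devs=("${pci_net_devs[@]##*/}")              # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done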
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:48.594 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:48.855 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:48.855 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:48.855 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:48.855 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:48.855 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:48.855 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:48.855 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:48.855 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:48.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:48.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:27:48.855 00:27:48.855 --- 10.0.0.2 ping statistics --- 00:27:48.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.855 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:27:48.855 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:48.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:48.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:27:48.855 00:27:48.855 --- 10.0.0.1 ping statistics --- 00:27:48.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.855 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:27:48.855 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:48.855 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:27:48.855 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:48.855 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:48.855 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:48.855 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:48.855 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:48.855 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:48.855 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:49.115 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:49.115 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:49.115 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:49.115 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.115 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:49.115 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=2832650 00:27:49.115 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 2832650 00:27:49.115 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2832650 ']' 00:27:49.115 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.115 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:49.115 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
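nvmf_tcp_init, traced above, builds the point-to-point topology for the whole auth test: the target-side interface moves into its own network namespace, both ends get 10.0.0.0/24 addresses, an iptables rule admits port 4420, and the two pings prove reachability before nvmf_tgt is started inside the namespace. A minimal re-sketch with this run's interface names:

target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

ip netns add "$ns"
ip link set "$target_if" netns "$ns"            # target side lives in the namespace
ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                              # initiator -> target
ip netns exec "$ns" ping -c 1 10.0.0.1          # target -> initiator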
00:27:49.115 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:49.115 16:52:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=59e79d2e3960d6bbd4b60ff71ec43802 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.8mm 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 59e79d2e3960d6bbd4b60ff71ec43802 0 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 59e79d2e3960d6bbd4b60ff71ec43802 0 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=59e79d2e3960d6bbd4b60ff71ec43802 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.8mm 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.8mm 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.8mm 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:50.057 16:52:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=10a708a349d7bd935dd9c67e41826d75d3861405adcd02755c77ee1ceedbbe0f 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.FOJ 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 10a708a349d7bd935dd9c67e41826d75d3861405adcd02755c77ee1ceedbbe0f 3 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 10a708a349d7bd935dd9c67e41826d75d3861405adcd02755c77ee1ceedbbe0f 3 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=10a708a349d7bd935dd9c67e41826d75d3861405adcd02755c77ee1ceedbbe0f 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.FOJ 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.FOJ 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.FOJ 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:50.057 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=3aee616352f8bfaf52fe4d75489b8b07ba48eafa90f8e495 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.GVx 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 3aee616352f8bfaf52fe4d75489b8b07ba48eafa90f8e495 0 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 3aee616352f8bfaf52fe4d75489b8b07ba48eafa90f8e495 0 
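Every gen_dhchap_key call in this stretch follows the same recipe: read random bytes with xxd, then wrap them in the NVMe DH-HMAC-CHAP secret format DHHC-1:<digest id>:<base64 payload>:. A hedged reconstruction, assuming (as in SPDK's helper) that the payload is the raw key with a little-endian CRC-32 appended before base64 encoding:

gen_dhchap_key() {              # usage: gen_dhchap_key null 32
    local digest=$1 len=$2 key file
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of entropy
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 -c '
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, "little")  # assumption: little-endian CRC-32 suffix
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
' "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"          # keep the secret file private
    echo "$file"
}
keys[1]=$(gen_dhchap_key null 48)   # as in the trace above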
00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=3aee616352f8bfaf52fe4d75489b8b07ba48eafa90f8e495 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.GVx 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.GVx 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.GVx 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:27:50.058 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=ca627abadbaa69d176ca0e5758c649e0cde4e867e2b81df4 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.EnN 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key ca627abadbaa69d176ca0e5758c649e0cde4e867e2b81df4 2 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 ca627abadbaa69d176ca0e5758c649e0cde4e867e2b81df4 2 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=ca627abadbaa69d176ca0e5758c649e0cde4e867e2b81df4 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.EnN 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.EnN 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.EnN 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:50.319 16:52:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=7a34ff53fab663310c3c965db733c23b 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.zlz 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 7a34ff53fab663310c3c965db733c23b 1 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 7a34ff53fab663310c3c965db733c23b 1 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=7a34ff53fab663310c3c965db733c23b 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.zlz 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.zlz 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.zlz 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=af290e6c6b8d1bc9e9fd62e0bb2b6401 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.oKN 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key af290e6c6b8d1bc9e9fd62e0bb2b6401 1 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 af290e6c6b8d1bc9e9fd62e0bb2b6401 1 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=af290e6c6b8d1bc9e9fd62e0bb2b6401 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.oKN 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.oKN 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.oKN 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=5e39291fa085f43b4105d8cd4f2c5a01917216d35151977c 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.zD9 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 5e39291fa085f43b4105d8cd4f2c5a01917216d35151977c 2 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 5e39291fa085f43b4105d8cd4f2c5a01917216d35151977c 2 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=5e39291fa085f43b4105d8cd4f2c5a01917216d35151977c 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.zD9 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.zD9 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.zD9 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:50.319 16:52:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:50.319 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=125f88333765b1933d98254ab2ef6d4b 00:27:50.320 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:27:50.320 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.ulT 00:27:50.320 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 125f88333765b1933d98254ab2ef6d4b 0 00:27:50.320 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 125f88333765b1933d98254ab2ef6d4b 0 00:27:50.320 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:50.320 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:50.320 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=125f88333765b1933d98254ab2ef6d4b 00:27:50.320 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:27:50.320 16:52:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.ulT 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.ulT 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ulT 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=e08479075c8759ad8e4d0758f6dbfcc3f43fc343d51b8f4ad174fa1665f3a6e8 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.8sW 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key e08479075c8759ad8e4d0758f6dbfcc3f43fc343d51b8f4ad174fa1665f3a6e8 3 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 e08479075c8759ad8e4d0758f6dbfcc3f43fc343d51b8f4ad174fa1665f3a6e8 3 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=e08479075c8759ad8e4d0758f6dbfcc3f43fc343d51b8f4ad174fa1665f3a6e8 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.8sW 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.8sW 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.8sW 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2832650 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2832650 ']' 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:50.581 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8mm 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.FOJ ]] 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FOJ 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.GVx 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.EnN ]] 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.EnN 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.zlz 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.oKN ]] 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oKN 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.zD9 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ulT ]] 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ulT 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.8sW 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:50.843 16:52:42 
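The rpc_cmd calls above register every generated secret, plus its optional controller counterpart, with the running nvmf_tgt, so DH-HMAC-CHAP can later reference each one by keyring name. Condensed, the loop is:

# keys[] / ckeys[] hold the /tmp/spdk.key-* paths generated above;
# rpc_cmd forwards to scripts/rpc.py on the target's RPC socket.
for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then        # ckeys[4] is deliberately left empty above
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done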
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:50.843 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:50.844 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.844 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.844 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:50.844 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.844 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:50.844 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:50.844 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:50.844 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:50.844 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:50.844 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:27:50.844 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:50.844 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:50.844 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:50.844 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:27:50.844 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:50.844 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:27:50.844 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:50.844 16:52:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:54.142 Waiting for block devices as requested 00:27:54.402 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:54.402 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:54.402 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:54.662 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:54.662 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:54.662 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:54.923 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:54.923 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:54.923 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:27:55.183 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:55.183 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:55.183 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:55.443 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:55.443 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:55.443 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:55.443 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:55.703 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:56.643 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:27:56.643 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:56.643 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:27:56.643 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:56.643 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:56.643 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:56.643 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:27:56.643 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:56.643 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:56.643 No valid GPT data, bailing 00:27:56.643 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:56.643 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:56.643 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:56.644 16:52:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.1 -t tcp -s 4420 00:27:56.644 00:27:56.644 Discovery Log Number of Records 2, Generation counter 2 00:27:56.644 =====Discovery Log Entry 0====== 00:27:56.644 trtype: tcp 00:27:56.644 adrfam: ipv4 00:27:56.644 subtype: current discovery subsystem 00:27:56.644 treq: not specified, sq flow control disable supported 00:27:56.644 portid: 1 00:27:56.644 trsvcid: 4420 00:27:56.644 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:56.644 traddr: 10.0.0.1 00:27:56.644 eflags: none 00:27:56.644 sectype: none 00:27:56.644 =====Discovery Log Entry 1====== 00:27:56.644 trtype: tcp 00:27:56.644 adrfam: ipv4 00:27:56.644 subtype: nvme subsystem 00:27:56.644 treq: not specified, sq flow control disable supported 00:27:56.644 portid: 1 00:27:56.644 trsvcid: 4420 00:27:56.644 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:56.644 traddr: 10.0.0.1 00:27:56.644 eflags: none 00:27:56.644 sectype: none 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.644 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
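
Everything from modprobe nvmet through the discovery log above is the kernel soft-target bring-up: a subsystem, a namespace backed by the freed-up /dev/nvme0n1, and a TCP port are created purely through configfs, and nvme discover then confirms that both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 are reachable on 10.0.0.1:4420. A condensed sketch of the same wiring, assuming the nvmet and nvmet-tcp modules are available and the attribute names follow the kernel's nvmet configfs layout:

    modprobe nvmet
    modprobe nvmet-tcp
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    # configfs materializes the namespaces/ and ports/ trees on mkdir
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp      > "$nvmet/ports/1/addr_trtype"
    echo 4420     > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4     > "$nvmet/ports/1/addr_adrfam"
    # expose the subsystem on the port, then verify from the initiator side
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"
    nvme discover -t tcp -a 10.0.0.1 -s 4420
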
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: ]] 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.905 nvme0n1 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.905 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: ]] 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
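
With key1/ckey1 already in the keyring, the two RPCs above are the entire host-side authenticated connect: bdev_nvme_set_options declares which digests and DH groups the host may negotiate, and bdev_nvme_attach_controller performs the fabric connect with DH-CHAP enabled. The same pair issued directly against a running target, with the parameters copied from this run:

    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
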
00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.166 nvme0n1 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.166 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.426 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.427 16:52:48 
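
Each cycle then proves the handshake actually produced a controller before tearing it down: bdev_nvme_get_controllers is piped through jq and the resulting name compared against the expected nvme0. A trimmed equivalent of that check, assuming the same rpc.py and jq are on PATH:

    # fail the cycle unless exactly the expected controller exists
    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]] || { echo "DH-CHAP connect failed" >&2; exit 1; }
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
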
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: ]] 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.427 16:52:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.427 nvme0n1 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: ]] 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.427 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.687 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.688 nvme0n1 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: ]] 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.688 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.949 nvme0n1 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.949 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.209 nvme0n1 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.209 16:52:49 
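
On the target side, nvmet_auth_set_key reconfigures the kernel before every cycle by rewriting the DH-CHAP attributes of the allow-listed host entry, which is what the echo 'hmac(sha256)' / echo ffdhe... / echo DHHC-1:... sequences above are doing. A sketch under the assumption that the attribute names match the kernel's nvmet host configfs layout; the key strings are placeholders, not secrets from this run:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest
    echo ffdhe2048      > "$host/dhchap_dhgroup"   # DH group
    echo 'DHHC-1:00:<host key placeholder>:' > "$host/dhchap_key"
    # only written when the controller must authenticate back (bidirectional)
    echo 'DHHC-1:02:<ctrlr key placeholder>:' > "$host/dhchap_ctrlr_key"
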
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.209 16:52:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:58.470 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:27:58.470 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: ]] 00:27:58.470 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:27:58.470 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:58.470 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.470 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.470 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:58.470 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:58.470 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.470 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:58.470 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.470 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.470 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.470 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.470 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:58.470 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:58.471 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:58.471 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.471 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.471 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:58.471 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.471 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:58.471 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:58.471 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:58.471 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:58.471 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.471 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.732 nvme0n1 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: ]] 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:58.732 
16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.732 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.994 nvme0n1 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: ]] 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.994 16:52:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.994 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.255 nvme0n1 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: ]] 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.255 16:52:50 
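
From here to the end of the section the log repeats the same set-key/connect/verify/detach cycle swept across every combination the test enumerates; sha256 with ffdhe3072 is in progress above, and the remaining digests and DH groups follow. Schematically, as a runnable stub in which an echo stands in for the test's own nvmet_auth_set_key and connect_authenticate helpers:

    for digest in sha256 sha384 sha512; do
        for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
            for keyid in 0 1 2 3 4; do
                # one nvmet_auth_set_key + connect_authenticate cycle per combo
                echo "cycle: $digest / $dhgroup / key$keyid"
            done
        done
    done
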
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.255 16:52:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.516 nvme0n1 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:59.516 16:52:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.516 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.776 nvme0n1 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:59.776 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: ]] 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.346 16:52:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.606 nvme0n1 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:00.606 16:52:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: ]] 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.606 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.177 nvme0n1 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: ]] 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
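The trace block repeating above is one iteration of the test's main loop: for every (digest, dhgroup, keyid) combination, host/auth.sh programs the kernel nvmet target with the key under test (nvmet_auth_set_key), restricts the SPDK initiator to exactly that digest and DH group (bdev_nvme_set_options), attaches with the matching host key — adding --dhchap-ctrlr-key only when a controller key exists, i.e. for bidirectional authentication — verifies the controller came up, and detaches before the next round. A condensed sketch of one iteration, using the loop variables visible in the trace (rpc_cmd is SPDK's usual rpc.py wrapper; the jq check mirrors the traced verification):

    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"      # program the kernel target
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        # ${ckeys[keyid]:+...} expands to nothing when no controller key is set
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0          # clean slate for next key
      done
    done

The get_main_ns_ip helper whose trace also recurs above resolves the initiator-side address by transport. A reconstruction from the traced steps — the indirect expansion is inferred, since the trace jumps from ip=NVMF_INITIATOR_IP straight to testing 10.0.0.1, and TEST_TRANSPORT is an assumed name for whatever expanded to "tcp" here:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # variable *names*, not values
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        ip=${!ip}                                    # dereference -> 10.0.0.1 here
        [[ -z $ip ]] && return 1
        echo "$ip"
    }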
00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:01.177 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.178 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.178 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:01.178 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.178 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:01.178 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:01.178 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:01.178 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:01.178 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.178 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.437 nvme0n1 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: ]] 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.437 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:01.438 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:01.438 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:01.438 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:01.438 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.438 16:52:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.697 nvme0n1 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.697 16:52:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.697 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.956 nvme0n1 00:28:01.956 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.956 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.956 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.956 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.956 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.956 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.956 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.956 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.956 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.956 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.216 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.216 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:02.216 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.216 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:28:02.216 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.216 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.216 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.216 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:02.216 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:02.216 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:02.216 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.216 16:52:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.598 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:03.598 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: ]] 00:28:03.598 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:03.598 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:03.598 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.598 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:03.598 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.598 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:03.598 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.598 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:03.598 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.598 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.598 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.598 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.598 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:03.598 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:03.599 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:03.599 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.599 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.599 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:03.599 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.599 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:28:03.599 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:03.599 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:03.599 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:03.599 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.599 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.170 nvme0n1 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: ]] 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 
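For readers of these traces: the DHHC-1 strings echoed above are NVMe-oF DH-HMAC-CHAP secrets in their standard textual form, DHHC-1:<hh>:<base64>:, where <hh> selects the optional hash transformation applied to the secret (00 = none, 01 = SHA-256 with a 32-byte secret, 02 = SHA-384 with 48 bytes, 03 = SHA-512 with 64 bytes) and the base64 field carries the raw secret followed by a CRC-32 check value. That is why the host key for keyid=1 just above begins with DHHC-1:00: while its controller key begins with DHHC-1:02:. Secrets in this form can be generated with nvme-cli, e.g. (invocation hedged — flag spellings vary between nvme-cli versions):

    nvme gen-dhchap-key --hmac=1 --key-length=32    # emits something like DHHC-1:01:<base64>: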
00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.170 16:52:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.740 nvme0n1 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.740 16:52:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: ]] 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:04.740 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.741 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.311 nvme0n1 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==:
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: ]]
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2:
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.311 16:52:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:05.881 nvme0n1
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=:
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=:
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:05.881 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:05.882 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.882 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:05.882 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.882 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:05.882 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:28:05.882 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:28:05.882 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:28:05.882 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:05.882 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:05.882 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:28:05.882 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:05.882 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:28:05.882 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:28:05.882 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:28:05.882 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:05.882 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.882 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.142 nvme0n1
00:28:06.142 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:06.142 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:06.142 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:06.142 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:06.142 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.142 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP:
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=:
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP:
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: ]]
00:28:06.402 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=:
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:06.403 16:52:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.006 nvme0n1
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==:
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==:
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==:
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: ]]
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==:
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
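Each pass in this trace exercises the same host-side RPC sequence: pin the host to one digest/dhgroup pair, attach with a DH-HMAC-CHAP key, check that the controller appears, then detach. A minimal sketch of that loop body, reconstructed from the commands above (not the verbatim host/auth.sh source; it assumes $rootdir points at an SPDK checkout, the target already holds the matching key, and the ckeys array is populated as in the trace):

    # rpc_cmd in the trace is assumed to resolve to SPDK's scripts/rpc.py
    rpc_cmd() { "$rootdir/scripts/rpc.py" "$@"; }

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Optional flag pair: expands to nothing when ckeys[keyid] is empty
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # The connect only counts if the controller actually shows up
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }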
"ckey${keyid}"}) 00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.006 16:52:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.657 nvme0n1 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:07.657 
16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: ]] 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.657 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.261 nvme0n1 00:28:08.261 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.261 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.261 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.261 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.261 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.261 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: ]] 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.520 
16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.520 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:08.521 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.521 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.521 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.521 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.521 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:08.521 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:08.521 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:08.521 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.521 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.521 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:08.521 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.521 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:08.521 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:08.521 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:08.521 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:08.521 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.521 16:52:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.090 nvme0n1 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:09.090 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
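In the keyid=4 passes above the controller key is empty (ckey=) and the attach is issued without --dhchap-ctrlr-key; that is the effect of the ${ckeys[keyid]:+...} expansion traced at host/auth.sh@58, which yields the flag pair only for a non-empty controller key. A self-contained demo of the idiom, with hypothetical key values:

    ckeys=([3]="DHHC-1:00:placeholder:" [4]="")   # keyid 4 deliberately has no ctrlr key
    for keyid in 3 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<flag omitted>}"
    done
    # keyid=3 -> --dhchap-ctrlr-key ckey3
    # keyid=4 -> <flag omitted>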
00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:09.091 16:53:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.028 nvme0n1
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP:
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=:
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP:
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: ]]
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=:
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.028 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.028 nvme0n1
00:28:10.029 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.029 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:10.029 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:10.029 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.029 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.029 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.029 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:10.029 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:10.029 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.029 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.029 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.029 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:10.029 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:28:10.029 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:10.029 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:10.029 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:10.029 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:10.029 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==:
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==:
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==:
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: ]]
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==:
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.289 nvme0n1
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk:
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP:
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk:
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: ]]
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP:
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.289 16:53:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:10.550 nvme0n1
00:28:10.550 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
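Every get_main_ns_ip call traced through nvmf/common.sh@767-781 resolves to 10.0.0.1 here: the helper maps the transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and then dereferences that name. A sketch of the logic as reconstructed from those trace lines (the real helper lives in SPDK's test/nvmf/common.sh; using TEST_TRANSPORT as the selector is an assumption):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # the variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion; 10.0.0.1 in this run
        echo "${!ip}"
    }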
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.550 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.550 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.550 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.550 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.550 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.550 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.550 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.550 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.550 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.550 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.550 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.550 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:10.550 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.550 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:10.550 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.550 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:10.550 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: ]] 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.551 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.812 nvme0n1 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.812 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.072 nvme0n1 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: ]] 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.072 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.332 nvme0n1 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.332 
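
The attach/verify/detach cycle that keeps repeating above is easiest to read as one helper. Below is a minimal sketch of the connect_authenticate flow reconstructed from the xtrace (host/auth.sh@55-65); rpc_cmd and the keys/ckeys arrays belong to the test framework and are assumed to be in scope, and the hardcoded 10.0.0.1 stands in for the get_main_ns_ip lookup the trace performs before each attach.

connect_authenticate() {    # sketch; the real helper lives in test/nvmf/host/auth.sh
    local digest=$1 dhgroup=$2 keyid=$3
    # The controller (bidirectional) key is optional -- pass it only when a
    # ckey exists for this keyid, exactly as the expansion at auth.sh@58 does:
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # Restrict the host to the digest/dhgroup pair under test:
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect over TCP with DH-HMAC-CHAP, confirm the controller came up, tear down:
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}
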
16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: ]] 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:11.332 16:53:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.332 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:11.333 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:11.333 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:11.333 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.333 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.333 16:53:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.592 nvme0n1 00:28:11.592 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.592 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.592 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.592 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.592 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.592 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.592 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.592 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.592 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.592 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.592 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.592 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.592 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:11.592 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.592 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:11.592 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: ]] 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.593 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.853 nvme0n1 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: ]] 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.853 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.114 nvme0n1 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:12.114 
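
On the target side, each iteration first runs nvmet_auth_set_key (auth.sh@42-51), and the trace only shows the values being echoed: the HMAC name, the DH group, and the DHHC-1 secrets. A plausible reconstruction follows, assuming the standard Linux kernel nvmet configfs per-host auth attributes as the destinations; the paths themselves never appear in the trace and are an assumption.

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # Assumed destination: kernel nvmet per-host DH-HMAC-CHAP attributes.
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "$host/dhchap_hash"    # auth.sh@48: e.g. 'hmac(sha384)'
    echo "$dhgroup" > "$host/dhchap_dhgroup"        # auth.sh@49: e.g. ffdhe3072
    echo "$key" > "$host/dhchap_key"                # auth.sh@50: host secret
    # auth.sh@51: controller secret only when bidirectional auth is tested:
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}
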
16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.114 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:12.115 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:12.115 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:12.115 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:12.115 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.115 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.375 nvme0n1 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.375 
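
Note the second field of each DHHC-1 secret above: it records the transformation hash the key was generated with (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the base64 payload carries the raw key material followed by a CRC32 trailer. Secrets of the same shape can be produced with nvme-cli; the exact flags below are assumed from a recent nvme-cli release, not taken from this run.

# 48-byte secret with a SHA-384 transformation, bound to the host NQN used here:
nvme gen-dhchap-key --hmac=2 --key-length=48 --nqn=nqn.2024-02.io.spdk:host0
# prints something like: DHHC-1:02:<base64 of key || crc32>:
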
16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: ]] 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.375 16:53:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.635 nvme0n1 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:12.635 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: ]] 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:12.636 16:53:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.636 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.896 nvme0n1 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:13.156 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: ]] 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.157 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.416 nvme0n1 00:28:13.416 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.416 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.416 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.416 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.416 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.416 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.416 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.416 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.416 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.416 16:53:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: ]] 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.417 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.678 nvme0n1 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:13.678 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:13.938 16:53:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:13.938 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:13.939 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.939 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.199 nvme0n1 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: ]] 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.199 16:53:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.459 nvme0n1 00:28:14.459 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.459 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.459 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.459 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.459 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.459 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: ]] 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.719 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.980 nvme0n1 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.980 16:53:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: ]] 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.980 16:53:06 
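[Annotation] The secrets cycling through these passes are DH-HMAC-CHAP configured secrets in the "DHHC-1:<id>:<base64>:" form. Reading from memory of the NVMe TP 8006 format (hedged): the two-digit id records the hash the secret is tied to (00 = used as-is, 01 = SHA-256/32-byte, 02 = SHA-384/48-byte, 03 = SHA-512/64-byte), and the base64 payload carries the raw secret plus a trailing 4-byte CRC-32. A quick shape check against the "01" key from the keyid-2 pass above:

    # 32 key bytes + 4 CRC bytes = 36 bytes for a "01" secret (CRC not verified here)
    key='DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk:'
    echo -n "$key" | cut -d: -f3 | base64 -d | wc -c    # prints 36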
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:14.980 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:14.981 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:14.981 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.981 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.981 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:14.981 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.981 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:14.981 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:14.981 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:14.981 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:14.981 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.981 16:53:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.550 nvme0n1 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: ]] 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:15.550 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:15.551 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:15.551 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.551 
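[Annotation] nvmet_auth_set_key (auth.sh@42-@51) ends in four echoes: the HMAC name, the DH group, the host secret, and the controller secret. On a kernel nvmet target those most plausibly land in the per-host DH-CHAP attributes under configfs; a sketch of the keyid-3 pass above, with the attribute names assumed from the kernel nvmet host tree and $key/$ckey standing for the two DHHC-1 strings echoed in the trace:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"       # auth.sh@48
    echo ffdhe6144      > "$host/dhchap_dhgroup"    # auth.sh@49
    echo "$key"         > "$host/dhchap_key"        # auth.sh@50, host secret
    echo "$ckey"        > "$host/dhchap_ctrl_key"   # auth.sh@51, controller secret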
16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.120 nvme0n1 00:28:16.120 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.120 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.120 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.120 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.120 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.120 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.120 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.120 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.121 16:53:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.690 nvme0n1 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.690 16:53:08 
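[Annotation] The keyid-4 pass above carried no controller secret (ckey=''), so the attach ran with --dhchap-key key4 only: unidirectional authentication. That is the work of the `${ckeys[keyid]:+...}` expansion at auth.sh@58, which drops the whole --dhchap-ctrlr-key flag when the array slot is empty. A self-contained demo of the pattern (array contents hypothetical):

    ckeys=("present" "")     # keyid 0 has a controller key, keyid 1 does not
    for keyid in 0 1; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<unidirectional>}"
    done
    # keyid=0 -> --dhchap-ctrlr-key ckey0
    # keyid=1 -> <unidirectional>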
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: ]] 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.690 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.259 nvme0n1 00:28:17.259 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.259 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.259 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.259 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.259 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.259 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: ]] 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
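[Annotation] The nvmf/common.sh@767-@781 block repeated before every attach is get_main_ns_ip resolving the connect address: a transport-to-variable-name map followed by an indirect expansion, ending in "echo 10.0.0.1" for TCP here. A reconstruction from the trace (sketch; the transport variable name is assumed):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                  # @773: "[[ -z tcp ]]"
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                  # @774: variable *name*
        [[ -z ${!ip} ]] && return 1                           # @776: dereferenced value
        echo "${!ip}"                                         # @781: 10.0.0.1 in this run
    }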
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:17.520 16:53:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:17.520 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.520 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.091 nvme0n1 00:28:18.091 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.091 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.091 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.091 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.091 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.091 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.091 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.091 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.091 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:18.091 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: ]] 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.352 
16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.352 16:53:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.923 nvme0n1 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: ]] 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:18.923 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:18.924 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.924 16:53:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.862 nvme0n1 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.862 16:53:11 
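[Annotation] Every RPC in this section is bracketed by autotest_common.sh@561 (xtrace_disable) and a "[[ 0 == 0 ]]" check at @589: the trace noise is suppressed while the JSON-RPC call runs, then the actual return code is asserted against the expected one. A rough sketch of that visible control flow only; the real helper keeps a persistent rpc.py session, and xtrace_restore is assumed as the counterpart of the disable call:

    rpc_cmd() {
        xtrace_disable            # @561: keep the RPC plumbing out of the log
        local rc=0
        scripts/rpc.py "$@" || rc=$?
        xtrace_restore
        [[ $rc == 0 ]]            # @589 prints as "[[ 0 == 0 ]]" on success
    }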
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:19.862 16:53:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.862 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.430 nvme0n1 00:28:20.430 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.430 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.430 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.430 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.430 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.430 16:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
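[Annotation] Here the outer digest loop (auth.sh@100) ticks over from sha384 to sha512 and the dhgroup sweep restarts at ffdhe2048, confirming the whole section is a three-deep sweep over digests, DH groups, and key ids. The skeleton, reconstructed from the auth.sh line numbers in the trace (array contents partly assumed; this log shows the sha384 and sha512 digests and the ffdhe2048/ffdhe6144/ffdhe8192 groups):

    for digest in "${digests[@]}"; do                            # @100
        for dhgroup in "${dhgroups[@]}"; do                      # @101
            for keyid in "${!keys[@]}"; do                       # @102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"       # @103
                connect_authenticate "$digest" "$dhgroup" "$keyid"     # @104
            done
        done
    done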
ckey=DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: ]] 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.430 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:20.689 nvme0n1 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: ]] 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.689 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.949 nvme0n1 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:20.949 
16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: ]] 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.949 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.209 nvme0n1 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:21.209 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: ]] 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.210 
16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.210 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.469 nvme0n1 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.469 16:53:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.469 nvme0n1 00:28:21.469 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.469 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.469 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.469 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.469 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.469 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: ]] 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:21.729 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:21.730 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:21.730 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.730 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.730 nvme0n1 00:28:21.730 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.730 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.730 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.730 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.730 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.730 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.989 
16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: ]] 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:21.989 16:53:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.989 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:21.990 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.990 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:21.990 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:21.990 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:21.990 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:21.990 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.990 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.990 nvme0n1 00:28:21.990 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.990 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.990 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.990 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.990 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.990 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.249 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.249 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.249 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.249 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.249 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.249 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.249 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:22.249 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.249 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:22.249 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:22.249 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:22.249 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:22.249 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:22.250 16:53:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: ]] 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.250 nvme0n1 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.250 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.509 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.509 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.509 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.509 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.509 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.509 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.509 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:22.509 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.509 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:22.509 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:22.509 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: ]] 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.510 16:53:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.510 16:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.510 nvme0n1 00:28:22.510 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.510 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.510 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.510 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.510 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.510 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:22.771 
16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
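
Every round traced above and below has the same shape: for each DH group (ffdhe2048, ffdhe3072, ffdhe4096) and each key index, host/auth.sh programs the key into the kernel nvmet target, restricts the SPDK host to exactly one digest/dhgroup pair, attaches over TCP with DH-HMAC-CHAP, verifies the controller came up, and detaches again. The sketch below reconstructs that loop from the traced commands (the nvmet_auth_set_key/connect_authenticate pair at host/auth.sh@103-104). It is a minimal sketch, not the full script: rpc_cmd and nvmet_auth_set_key are helper functions from the test framework as seen in the trace, and it assumes the keys[]/ckeys[] arrays were populated and registered earlier in the run and that nqn.2024-02.io.spdk:cnode0 is already exported on 10.0.0.1:4420 — none of that setup appears in this part of the log.

    # Assumes keys[0..4] and ckeys[0..3] hold DHHC-1 secrets generated
    # earlier in the run; ckeys[4] is intentionally unset (see below).
    digest=sha512
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
        for keyid in "${!keys[@]}"; do
            # Target side: install hmac(sha512), the DH group, the host key,
            # and the bidirectional controller key when one exists.
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

            # Host side: allow exactly one digest/dhgroup combination, so a
            # successful attach proves this specific pair negotiated.
            rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
                --dhchap-dhgroups "$dhgroup"

            # keyid 4 has no controller key, so the ctrlr-key flag is only
            # added when ckeys[keyid] is non-empty (unidirectional auth);
            # this expansion is copied verbatim from the traced script.
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
                -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" "${ckey[@]}"

            # The attach only yields a controller if authentication passed;
            # confirm it exists, then tear down before the next combination.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done

Restricting the host to a single digest/dhgroup per iteration is the point of the design: any failure in the trace pins down the exact (digest, dhgroup, key) combination that broke, instead of letting negotiation silently fall back to another group.
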
00:28:22.771 nvme0n1 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.771 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: ]] 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:23.032 16:53:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.032 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.292 nvme0n1 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.292 16:53:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: ]] 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.292 16:53:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.292 16:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.552 nvme0n1 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: ]] 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:23.552 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:23.553 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:23.553 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.553 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.812 nvme0n1 00:28:23.812 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.812 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.812 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.812 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.812 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.812 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.071 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.071 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:24.071 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.071 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.071 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.071 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: ]] 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.072 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.332 nvme0n1 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.332 16:53:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.592 nvme0n1 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: ]] 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.592 16:53:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.592 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.163 nvme0n1 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: ]] 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:25.163 16:53:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.163 16:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.732 nvme0n1 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: ]] 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.732 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:25.733 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:25.733 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:25.733 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:25.733 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.733 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.992 nvme0n1 00:28:25.992 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.992 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.992 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.992 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.992 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.992 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.992 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.992 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.992 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.992 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: ]] 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.253 16:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.513 nvme0n1 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:26.513 16:53:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.513 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.082 nvme0n1 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTllNzlkMmUzOTYwZDZiYmQ0YjYwZmY3MWVjNDM4MDIdfmIP: 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: ]] 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTBhNzA4YTM0OWQ3YmQ5MzVkZDljNjdlNDE4MjZkNzVkMzg2MTQwNWFkY2QwMjc1NWM3N2VlMWNlZWRiYmUwZgY1pxs=: 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.082 16:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.021 nvme0n1 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: ]] 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:28.021 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:28.022 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:28.022 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.022 16:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.591 nvme0n1 00:28:28.591 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.591 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.591 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.591 16:53:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.591 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.591 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.591 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.591 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.591 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.591 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: ]] 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.852 16:53:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.852 16:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.423 nvme0n1 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWUzOTI5MWZhMDg1ZjQzYjQxMDVkOGNkNGYyYzVhMDE5MTcyMTZkMzUxNTE5NzdjUKVISg==: 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: ]] 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI1Zjg4MzMzNzY1YjE5MzNkOTgyNTRhYjJlZjZkNGISyfi2: 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:29.423 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:29.424 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:29.424 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.424 
16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.362 nvme0n1 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTA4NDc5MDc1Yzg3NTlhZDhlNGQwNzU4ZjZkYmZjYzNmNDNmYzM0M2Q1MWI4ZjRhZDE3NGZhMTY2NWYzYTZlOL+yy0I=: 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.362 16:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.933 nvme0n1 00:28:30.933 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.933 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.933 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.933 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.933 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.933 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.933 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.933 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.933 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.933 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: ]] 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.934 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.195 request: 00:28:31.195 { 00:28:31.195 "name": "nvme0", 00:28:31.195 "trtype": "tcp", 00:28:31.195 "traddr": "10.0.0.1", 00:28:31.195 "adrfam": "ipv4", 00:28:31.195 "trsvcid": "4420", 00:28:31.195 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:31.195 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:31.195 "prchk_reftag": false, 00:28:31.195 "prchk_guard": false, 00:28:31.195 "hdgst": false, 00:28:31.195 "ddgst": false, 00:28:31.195 "allow_unrecognized_csi": false, 00:28:31.195 "method": "bdev_nvme_attach_controller", 00:28:31.195 "req_id": 1 00:28:31.195 } 00:28:31.195 Got JSON-RPC error response 00:28:31.195 response: 00:28:31.195 { 00:28:31.195 "code": -5, 00:28:31.195 "message": "Input/output error" 00:28:31.195 } 00:28:31.195 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:31.195 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:31.195 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:31.195 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:31.195 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:31.195 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.195 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.195 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
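[editor's note] The -5 (Input/output error) response above is the expected outcome: the target has been provisioned with a DH-HMAC-CHAP key, so an attach that carries no --dhchap-key is rejected at connect time. A minimal standalone reproduction of this check might look like the following sketch; the rpc.py path, address, and NQNs here are illustrative placeholders, not values taken from this run.

    # expect an un-authenticated attach to fail against an auth-required target (sketch)
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "unexpected: attach succeeded without a DH-HMAC-CHAP key" >&2
        exit 1
    fi
    # the failed attach must not leave a stale controller behind
    [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq length) -eq 0 ]]
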
00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.196 request: 00:28:31.196 { 00:28:31.196 "name": "nvme0", 00:28:31.196 "trtype": "tcp", 00:28:31.196 "traddr": "10.0.0.1", 00:28:31.196 "adrfam": "ipv4", 00:28:31.196 "trsvcid": "4420", 00:28:31.196 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:31.196 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:31.196 "prchk_reftag": false, 00:28:31.196 "prchk_guard": false, 00:28:31.196 "hdgst": false, 00:28:31.196 "ddgst": false, 00:28:31.196 "dhchap_key": "key2", 00:28:31.196 "allow_unrecognized_csi": false, 00:28:31.196 "method": "bdev_nvme_attach_controller", 00:28:31.196 "req_id": 1 00:28:31.196 } 00:28:31.196 Got JSON-RPC error response 00:28:31.196 response: 00:28:31.196 { 00:28:31.196 "code": -5, 00:28:31.196 "message": "Input/output error" 00:28:31.196 } 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
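[editor's note] Both failure paths run through the harness's NOT wrapper, which inverts a command's exit status so that an expected failure counts as a pass; that is what the es=/valid_exec_arg bookkeeping in the trace implements. A stripped-down equivalent, assuming nothing beyond plain bash and simplified from the real helper:

    # minimal stand-in for the NOT helper seen in the trace (sketch)
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded -> test failure
        fi
        return 0        # command failed as expected -> test passes
    }
    # usage, mirroring the wrong-key case above:
    # NOT rpc_cmd bdev_nvme_attach_controller ... --dhchap-key key2
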
00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.196 request: 00:28:31.196 { 00:28:31.196 "name": "nvme0", 00:28:31.196 "trtype": "tcp", 00:28:31.196 "traddr": "10.0.0.1", 00:28:31.196 "adrfam": "ipv4", 00:28:31.196 "trsvcid": "4420", 00:28:31.196 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:31.196 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:31.196 "prchk_reftag": false, 00:28:31.196 "prchk_guard": false, 00:28:31.196 "hdgst": false, 00:28:31.196 "ddgst": false, 00:28:31.196 "dhchap_key": "key1", 00:28:31.196 "dhchap_ctrlr_key": "ckey2", 00:28:31.196 "allow_unrecognized_csi": false, 00:28:31.196 "method": "bdev_nvme_attach_controller", 00:28:31.196 "req_id": 1 00:28:31.196 } 00:28:31.196 Got JSON-RPC error response 00:28:31.196 response: 00:28:31.196 { 00:28:31.196 "code": -5, 00:28:31.196 "message": "Input/output 
error" 00:28:31.196 } 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.196 16:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.456 nvme0n1 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: ]] 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:31.456 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:31.457 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:31.457 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:31.457 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:31.457 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:31.457 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:31.457 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.457 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.716 request: 00:28:31.716 { 00:28:31.716 "name": "nvme0", 00:28:31.716 "dhchap_key": "key1", 00:28:31.716 "dhchap_ctrlr_key": "ckey2", 00:28:31.716 "method": "bdev_nvme_set_keys", 00:28:31.716 "req_id": 1 00:28:31.716 } 00:28:31.716 Got JSON-RPC error response 00:28:31.716 response: 00:28:31.716 { 00:28:31.716 "code": -13, 00:28:31.716 "message": "Permission denied" 00:28:31.716 } 00:28:31.716 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:31.716 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:31.716 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:31.716 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:31.716 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:28:31.716 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.716 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:31.716 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.716 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.716 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.716 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:31.716 16:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:32.655 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.655 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:32.655 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.655 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.655 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.655 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:32.655 16:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:33.594 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.594 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:33.594 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.594 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FlZTYxNjM1MmY4YmZhZjUyZmU0ZDc1NDg5YjhiMDdiYTQ4ZWFmYTkwZjhlNDk1Tfd84Q==: 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: ]] 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Y2E2MjdhYmFkYmFhNjlkMTc2Y2EwZTU3NThjNjQ5ZTBjZGU0ZTg2N2UyYjgxZGY0QqYBdg==: 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:33.854 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.855 nvme0n1 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2EzNGZmNTNmYWI2NjMzMTBjM2M5NjVkYjczM2MyM2KQ3TUk: 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: ]] 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWYyOTBlNmM2YjhkMWJjOWU5ZmQ2MmUwYmIyYjY0MDF6fNOP: 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.855 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.115 request: 00:28:34.115 { 00:28:34.115 "name": "nvme0", 00:28:34.115 "dhchap_key": "key2", 00:28:34.115 "dhchap_ctrlr_key": "ckey1", 00:28:34.115 "method": "bdev_nvme_set_keys", 00:28:34.115 "req_id": 1 00:28:34.115 } 00:28:34.115 Got JSON-RPC error response 00:28:34.115 response: 00:28:34.115 { 00:28:34.115 "code": -13, 00:28:34.115 "message": "Permission denied" 00:28:34.115 } 00:28:34.115 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:34.115 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:34.115 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:34.115 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:34.115 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:34.115 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.115 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:34.115 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.115 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.115 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.115 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:34.115 16:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:35.058 16:53:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:35.058 rmmod nvme_tcp 00:28:35.058 rmmod nvme_fabrics 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 2832650 ']' 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 2832650 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2832650 ']' 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2832650 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:35.058 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2832650 00:28:35.318 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:35.318 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:35.318 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2832650' 00:28:35.318 killing process with pid 2832650 00:28:35.318 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2832650 00:28:35.318 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2832650 00:28:35.318 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:35.318 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:35.319 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:35.319 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:35.319 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:28:35.319 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:28:35.319 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:35.319 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:35.319 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:35.319 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.319 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:35.319 16:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.857 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:37.857 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:37.857 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:37.857 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:37.857 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:37.857 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:28:37.857 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:37.857 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:37.857 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:37.857 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:37.857 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:28:37.857 16:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:28:37.857 16:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:41.152 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:41.152 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:41.152 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:41.152 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:41.152 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:41.152 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:41.152 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:41.152 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:41.152 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:41.152 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:41.152 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:41.152 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:41.152 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:41.152 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:41.152 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:41.152 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:43.061 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:28:43.061 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.8mm /tmp/spdk.key-null.GVx /tmp/spdk.key-sha256.zlz /tmp/spdk.key-sha384.zD9 /tmp/spdk.key-sha512.8sW /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:43.061 16:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:46.356 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:46.356 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:46.356 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
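[editor's note] The configfs teardown traced above has to run in dependency order: the host ACL link and the port-to-subsystem link are removed first, then the namespace, port, and subsystem directories, and only then can the nvmet modules be unloaded. Condensed into one sketch (the destination of the traced 'echo 0' is not shown in the log; it is presumably the namespace enable switch):

    # kernel nvmet teardown in dependency order (sketch, mirroring the trace)
    CFG=/sys/kernel/config/nvmet
    NQN=nqn.2024-02.io.spdk:cnode0
    rm "$CFG/subsystems/$NQN/allowed_hosts/nqn.2024-02.io.spdk:host0"
    rmdir "$CFG/hosts/nqn.2024-02.io.spdk:host0"
    echo 0 > "$CFG/subsystems/$NQN/namespaces/1/enable"   # assumed target of the traced 'echo 0'
    rm -f "$CFG/ports/1/subsystems/$NQN"
    rmdir "$CFG/subsystems/$NQN/namespaces/1" "$CFG/ports/1" "$CFG/subsystems/$NQN"
    modprobe -r nvmet_tcp nvmet   # only once configfs is empty
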
00:28:46.356 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:46.356 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:46.356 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:46.356 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:46.356 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:46.356 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:46.356 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:46.356 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:46.356 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:46.356 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:46.356 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:46.356 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:46.356 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:46.356 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:46.616 00:28:46.616 real 1m4.817s 00:28:46.616 user 0m57.541s 00:28:46.616 sys 0m15.153s 00:28:46.616 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:46.616 16:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.616 ************************************ 00:28:46.616 END TEST nvmf_auth_host 00:28:46.616 ************************************ 00:28:46.616 16:53:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:46.616 16:53:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:46.616 16:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:46.616 16:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:46.616 16:53:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.616 ************************************ 00:28:46.616 START TEST nvmf_digest 00:28:46.616 ************************************ 00:28:46.616 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:46.876 * Looking for test storage... 
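[editor's note] The real/user/sys summary and the START/END banners above come from the harness's run_test wrapper, which times each test script and frames it for log scraping. Its approximate shape, simplified from the actual helper:

    # approximate shape of the run_test wrapper (sketch, simplified)
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                  # produces the real/user/sys lines seen above
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
    # e.g. run_test nvmf_digest .../test/nvmf/host/digest.sh --transport=tcp
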
00:28:46.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:46.876 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:46.876 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:28:46.876 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:46.876 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:46.876 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:46.876 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:46.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.877 --rc genhtml_branch_coverage=1 00:28:46.877 --rc genhtml_function_coverage=1 00:28:46.877 --rc genhtml_legend=1 00:28:46.877 --rc geninfo_all_blocks=1 00:28:46.877 --rc geninfo_unexecuted_blocks=1 00:28:46.877 00:28:46.877 ' 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:46.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.877 --rc genhtml_branch_coverage=1 00:28:46.877 --rc genhtml_function_coverage=1 00:28:46.877 --rc genhtml_legend=1 00:28:46.877 --rc geninfo_all_blocks=1 00:28:46.877 --rc geninfo_unexecuted_blocks=1 00:28:46.877 00:28:46.877 ' 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:46.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.877 --rc genhtml_branch_coverage=1 00:28:46.877 --rc genhtml_function_coverage=1 00:28:46.877 --rc genhtml_legend=1 00:28:46.877 --rc geninfo_all_blocks=1 00:28:46.877 --rc geninfo_unexecuted_blocks=1 00:28:46.877 00:28:46.877 ' 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:46.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.877 --rc genhtml_branch_coverage=1 00:28:46.877 --rc genhtml_function_coverage=1 00:28:46.877 --rc genhtml_legend=1 00:28:46.877 --rc geninfo_all_blocks=1 00:28:46.877 --rc geninfo_unexecuted_blocks=1 00:28:46.877 00:28:46.877 ' 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.877 
16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:46.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:46.877 16:53:38 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:46.877 16:53:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:55.055 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:55.055 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:55.055 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:55.055 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:55.055 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:55.055 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:55.055 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:55.055 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:55.055 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:55.056 
16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:55.056 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:55.056 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:55.056 Found net devices under 0000:4b:00.0: cvl_0_0 
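The device discovery above is nvmf/common.sh walking its pci_bus_cache: the two Intel E810 ports (device ID 0x159b) land in pci_devs, and each PCI address is then resolved to its kernel netdev through sysfs. A standalone sketch of the same lookup, assuming stock lspci and sysfs; the 0x159b ID and the cvl_* names are taken from this run:
    # list E810 (8086:159b) ports and the netdevs behind them, as common.sh does
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "$pci -> ${net##*/}"    # e.g. 0000:4b:00.0 -> cvl_0_0
        done
    done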
00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:55.056 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:55.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:55.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.545 ms 00:28:55.056 00:28:55.056 --- 10.0.0.2 ping statistics --- 00:28:55.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.056 rtt min/avg/max/mdev = 0.545/0.545/0.545/0.000 ms 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:55.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:55.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:28:55.056 00:28:55.056 --- 10.0.0.1 ping statistics --- 00:28:55.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.056 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:55.056 ************************************ 00:28:55.056 START TEST nvmf_digest_clean 00:28:55.056 ************************************ 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:55.056 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:55.057 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:55.057 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:55.057 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:55.057 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:55.057 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:55.057 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:55.057 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=2848793 00:28:55.057 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 2848793 00:28:55.057 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2848793 ']' 00:28:55.057 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.057 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:55.057 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.057 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:55.057 16:53:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:55.057 [2024-10-01 16:53:45.899954] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:28:55.057 [2024-10-01 16:53:45.899994] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.057 [2024-10-01 16:53:45.974638] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.057 [2024-10-01 16:53:46.038636] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.057 [2024-10-01 16:53:46.038675] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.057 [2024-10-01 16:53:46.038683] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.057 [2024-10-01 16:53:46.038693] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.057 [2024-10-01 16:53:46.038699] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
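The nvmf_tcp_init block traced earlier is what gives the target its own network view: the first E810 port moves into a private namespace and takes the target IP, the second stays in the root namespace as the initiator side, and an iptables rule plus two pings validate the path before any NVMe traffic flows. Reduced to its traced commands (addr flushes and the iptables comment omitted):
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1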
00:28:55.057 [2024-10-01 16:53:46.038718] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:55.057 null0 00:28:55.057 [2024-10-01 16:53:46.210468] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.057 [2024-10-01 16:53:46.234719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2848933 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2848933 /var/tmp/bperf.sock 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2848933 ']' 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:55.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:55.057 [2024-10-01 16:53:46.293907] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:28:55.057 [2024-10-01 16:53:46.293978] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848933 ] 00:28:55.057 [2024-10-01 16:53:46.350408] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.057 [2024-10-01 16:53:46.416856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:55.057 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:55.358 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:55.358 16:53:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:55.659 nvme0n1 00:28:55.659 16:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:55.659 16:53:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:55.659 Running I/O for 2 seconds... 
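The two rpc.py calls traced just above are the whole initiator-side digest setup: framework_start_init releases bdevperf from --wait-for-rpc, and bdev_nvme_attach_controller is passed --ddgst so data PDUs on the TCP connection carry a CRC32C data digest (rpc.py also accepts --hdgst for header digest). Condensed, with the socket path used in this run:
    rpc="./scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc framework_start_init
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0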
00:28:57.536 22017.00 IOPS, 86.00 MiB/s 22129.00 IOPS, 86.44 MiB/s 00:28:57.536 Latency(us) 00:28:57.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.536 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:57.536 nvme0n1 : 2.00 22150.31 86.52 0.00 0.00 5773.85 2255.95 14821.22 00:28:57.536 =================================================================================================================== 00:28:57.536 Total : 22150.31 86.52 0.00 0.00 5773.85 2255.95 14821.22 00:28:57.536 { 00:28:57.536 "results": [ 00:28:57.536 { 00:28:57.536 "job": "nvme0n1", 00:28:57.536 "core_mask": "0x2", 00:28:57.536 "workload": "randread", 00:28:57.536 "status": "finished", 00:28:57.536 "queue_depth": 128, 00:28:57.536 "io_size": 4096, 00:28:57.536 "runtime": 2.003674, 00:28:57.536 "iops": 22150.309880749064, 00:28:57.536 "mibps": 86.52464797167603, 00:28:57.536 "io_failed": 0, 00:28:57.536 "io_timeout": 0, 00:28:57.536 "avg_latency_us": 5773.849147159452, 00:28:57.536 "min_latency_us": 2255.9507692307693, 00:28:57.536 "max_latency_us": 14821.218461538461 00:28:57.536 } 00:28:57.536 ], 00:28:57.536 "core_count": 1 00:28:57.536 } 00:28:57.536 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:57.536 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:57.536 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:57.536 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:57.536 | select(.opcode=="crc32c") 00:28:57.536 | "\(.module_name) \(.executed)"' 00:28:57.536 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:57.796 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:57.796 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:57.796 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:57.796 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:57.796 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2848933 00:28:57.796 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2848933 ']' 00:28:57.796 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2848933 00:28:57.796 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:57.796 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:57.796 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2848933 00:28:57.796 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:57.796 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:57.796 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2848933' 00:28:57.796 killing process with pid 2848933 00:28:57.796 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2848933 00:28:57.796 Received shutdown signal, test time was about 2.000000 seconds 00:28:57.796 00:28:57.796 Latency(us) 00:28:57.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.796 =================================================================================================================== 00:28:57.796 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:57.796 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2848933 00:28:58.056 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:58.056 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:58.056 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:58.056 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:58.056 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:58.056 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:58.056 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:58.056 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2849440 00:28:58.056 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2849440 /var/tmp/bperf.sock 00:28:58.056 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2849440 ']' 00:28:58.056 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:58.056 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:58.056 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:58.056 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:58.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:58.056 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:58.056 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:58.056 [2024-10-01 16:53:49.602489] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:28:58.056 [2024-10-01 16:53:49.602547] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2849440 ] 00:28:58.056 I/O size of 131072 is greater than zero copy threshold (65536). 
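killprocess, traced above for pid 2848933, is essentially a guarded kill; roughly:
    kill -0 "$pid"                       # bail out if the process is already gone
    ps --no-headers -o comm= "$pid"      # identify it (reactor_1 here) and refuse 'sudo'
    kill "$pid" && wait "$pid"           # SIGTERM, then reap to collect the exit status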
00:28:58.056 Zero copy mechanism will not be used. 00:28:58.056 [2024-10-01 16:53:49.653808] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.056 [2024-10-01 16:53:49.709721] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.317 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:58.317 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:58.317 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:58.317 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:58.317 16:53:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:58.577 16:53:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:58.577 16:53:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:58.836 nvme0n1 00:28:58.836 16:53:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:58.836 16:53:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:58.836 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:58.836 Zero copy mechanism will not be used. 00:28:58.836 Running I/O for 2 seconds... 
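Sanity check for the numbers that follow: MiB/s is just IOPS * io_size / 2^20, so 5492.44 IOPS at 128 KiB works out to 5492.44 * 131072 / 1048576 = 686.55 MiB/s, matching the mibps value in the JSON blob below.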
00:29:01.157 5744.00 IOPS, 718.00 MiB/s 5490.50 IOPS, 686.31 MiB/s 00:29:01.157 Latency(us) 00:29:01.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.157 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:01.157 nvme0n1 : 2.00 5492.44 686.55 0.00 0.00 2910.51 601.80 11998.13 00:29:01.157 =================================================================================================================== 00:29:01.157 Total : 5492.44 686.55 0.00 0.00 2910.51 601.80 11998.13 00:29:01.157 { 00:29:01.157 "results": [ 00:29:01.157 { 00:29:01.157 "job": "nvme0n1", 00:29:01.157 "core_mask": "0x2", 00:29:01.157 "workload": "randread", 00:29:01.157 "status": "finished", 00:29:01.157 "queue_depth": 16, 00:29:01.157 "io_size": 131072, 00:29:01.157 "runtime": 2.002207, 00:29:01.157 "iops": 5492.439093460366, 00:29:01.157 "mibps": 686.5548866825458, 00:29:01.157 "io_failed": 0, 00:29:01.157 "io_timeout": 0, 00:29:01.157 "avg_latency_us": 2910.514696455677, 00:29:01.157 "min_latency_us": 601.7969230769231, 00:29:01.157 "max_latency_us": 11998.129230769231 00:29:01.157 } 00:29:01.157 ], 00:29:01.157 "core_count": 1 00:29:01.157 } 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:01.157 | select(.opcode=="crc32c") 00:29:01.157 | "\(.module_name) \(.executed)"' 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2849440 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2849440 ']' 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2849440 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2849440 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2849440' 00:29:01.157 killing process with pid 2849440 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2849440 00:29:01.157 Received shutdown signal, test time was about 2.000000 seconds 00:29:01.157 00:29:01.157 Latency(us) 00:29:01.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.157 =================================================================================================================== 00:29:01.157 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2849440 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2850035 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2850035 /var/tmp/bperf.sock 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2850035 ']' 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:01.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:01.157 16:53:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:01.419 [2024-10-01 16:53:52.875355] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:29:01.419 [2024-10-01 16:53:52.875423] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850035 ] 00:29:01.419 [2024-10-01 16:53:52.932290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.419 [2024-10-01 16:53:52.986317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.419 16:53:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:01.419 16:53:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:01.419 16:53:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:01.419 16:53:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:01.419 16:53:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:01.678 16:53:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.678 16:53:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:02.249 nvme0n1 00:29:02.249 16:53:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:02.249 16:53:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:02.249 Running I/O for 2 seconds... 
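After every run the harness checks that the digest work actually executed in the expected accel module: it pulls accel framework stats over the bperf socket and filters the crc32c opcode with jq. Software is the expected module throughout this test, since all runs set scan_dsa=false. The check, lifted from the trace:
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # expected: 'software <non-zero count>'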
00:29:04.129 23251.00 IOPS, 90.82 MiB/s 23380.50 IOPS, 91.33 MiB/s 00:29:04.129 Latency(us) 00:29:04.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.129 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:04.129 nvme0n1 : 2.00 23412.19 91.45 0.00 0.00 5462.38 2180.33 16434.41 00:29:04.129 =================================================================================================================== 00:29:04.129 Total : 23412.19 91.45 0.00 0.00 5462.38 2180.33 16434.41 00:29:04.129 { 00:29:04.129 "results": [ 00:29:04.129 { 00:29:04.129 "job": "nvme0n1", 00:29:04.129 "core_mask": "0x2", 00:29:04.129 "workload": "randwrite", 00:29:04.129 "status": "finished", 00:29:04.129 "queue_depth": 128, 00:29:04.129 "io_size": 4096, 00:29:04.129 "runtime": 2.00276, 00:29:04.129 "iops": 23412.191176176875, 00:29:04.129 "mibps": 91.45387178194092, 00:29:04.129 "io_failed": 0, 00:29:04.129 "io_timeout": 0, 00:29:04.129 "avg_latency_us": 5462.384239308219, 00:29:04.129 "min_latency_us": 2180.3323076923075, 00:29:04.129 "max_latency_us": 16434.412307692306 00:29:04.129 } 00:29:04.129 ], 00:29:04.129 "core_count": 1 00:29:04.129 } 00:29:04.129 16:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:04.389 16:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:04.389 16:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:04.389 16:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:04.389 | select(.opcode=="crc32c") 00:29:04.389 | "\(.module_name) \(.executed)"' 00:29:04.389 16:53:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:04.389 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:04.389 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:04.389 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:04.389 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:04.389 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2850035 00:29:04.389 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2850035 ']' 00:29:04.389 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2850035 00:29:04.389 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:04.389 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:04.389 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2850035 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2850035' 00:29:04.650 killing process with pid 2850035 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2850035 00:29:04.650 Received shutdown signal, test time was about 2.000000 seconds 00:29:04.650 00:29:04.650 Latency(us) 00:29:04.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.650 =================================================================================================================== 00:29:04.650 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2850035 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2850646 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2850646 /var/tmp/bperf.sock 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2850646 ']' 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:04.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:04.650 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:04.650 [2024-10-01 16:53:56.270861] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:29:04.650 [2024-10-01 16:53:56.270914] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850646 ] 00:29:04.650 I/O size of 131072 is greater than zero copy threshold (65536). 
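This is the last of the four run_bperf combinations nvmf_digest_clean walks through: randread and randwrite, each at 4 KiB/QD128 and at 128 KiB/QD16, all with DSA disabled (the dsa_initiator/dsa_target branches in host/digest.sh are skipped because the suite was started without a dsa argument).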
00:29:04.650 Zero copy mechanism will not be used. 00:29:04.650 [2024-10-01 16:53:56.322054] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.910 [2024-10-01 16:53:56.376187] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.910 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:04.910 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:04.910 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:04.910 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:04.910 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:05.170 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:05.170 16:53:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:05.430 nvme0n1 00:29:05.690 16:53:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:05.690 16:53:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:05.690 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:05.690 Zero copy mechanism will not be used. 00:29:05.690 Running I/O for 2 seconds... 
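As in the earlier runs, the results below are printed twice: a human-readable Latency table followed by a JSON object; iops, mibps and avg_latency_us in the JSON carry the same values as the table's IOPS, MiB/s and Average columns, with all latencies in microseconds.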
00:29:07.573 3668.00 IOPS, 458.50 MiB/s 3714.50 IOPS, 464.31 MiB/s 00:29:07.573 Latency(us) 00:29:07.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.573 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:07.573 nvme0n1 : 2.00 3714.64 464.33 0.00 0.00 4300.79 1739.22 14115.45 00:29:07.573 =================================================================================================================== 00:29:07.573 Total : 3714.64 464.33 0.00 0.00 4300.79 1739.22 14115.45 00:29:07.573 { 00:29:07.573 "results": [ 00:29:07.573 { 00:29:07.573 "job": "nvme0n1", 00:29:07.573 "core_mask": "0x2", 00:29:07.573 "workload": "randwrite", 00:29:07.573 "status": "finished", 00:29:07.573 "queue_depth": 16, 00:29:07.573 "io_size": 131072, 00:29:07.573 "runtime": 2.004234, 00:29:07.573 "iops": 3714.636115343817, 00:29:07.573 "mibps": 464.3295144179771, 00:29:07.573 "io_failed": 0, 00:29:07.573 "io_timeout": 0, 00:29:07.573 "avg_latency_us": 4300.79365190887, 00:29:07.573 "min_latency_us": 1739.2246153846154, 00:29:07.573 "max_latency_us": 14115.446153846155 00:29:07.573 } 00:29:07.573 ], 00:29:07.573 "core_count": 1 00:29:07.573 } 00:29:07.573 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:07.573 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:07.573 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:07.573 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:07.573 | select(.opcode=="crc32c") 00:29:07.573 | "\(.module_name) \(.executed)"' 00:29:07.573 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:07.834 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:07.834 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:07.834 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:07.834 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:07.834 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2850646 00:29:07.834 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2850646 ']' 00:29:07.834 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2850646 00:29:07.834 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:07.834 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:07.834 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2850646 00:29:07.834 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:07.834 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:07.834 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2850646' 00:29:07.834 killing process with pid 2850646 00:29:07.834 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2850646 00:29:07.834 Received shutdown signal, test time was about 2.000000 seconds 00:29:07.834 00:29:07.834 Latency(us) 00:29:07.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.834 =================================================================================================================== 00:29:07.834 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:07.834 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2850646 00:29:08.095 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2848793 00:29:08.095 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2848793 ']' 00:29:08.095 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2848793 00:29:08.095 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:08.095 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:08.095 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2848793 00:29:08.095 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:08.095 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:08.095 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2848793' 00:29:08.095 killing process with pid 2848793 00:29:08.095 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2848793 00:29:08.095 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2848793 00:29:08.356 00:29:08.356 real 0m13.919s 00:29:08.356 user 0m28.149s 00:29:08.356 sys 0m3.319s 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:08.356 ************************************ 00:29:08.356 END TEST nvmf_digest_clean 00:29:08.356 ************************************ 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:08.356 ************************************ 00:29:08.356 START TEST nvmf_digest_error 00:29:08.356 ************************************ 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:08.356 16:53:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=2851294 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 2851294 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2851294 ']' 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:08.356 16:53:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:08.356 [2024-10-01 16:53:59.916055] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:29:08.356 [2024-10-01 16:53:59.916103] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.356 [2024-10-01 16:53:59.997165] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.616 [2024-10-01 16:54:00.060653] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.616 [2024-10-01 16:54:00.060690] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.616 [2024-10-01 16:54:00.060697] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:08.616 [2024-10-01 16:54:00.060704] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:08.616 [2024-10-01 16:54:00.060709] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
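Annotation: per the target's own startup notices just above, a tracepoint snapshot can be pulled while the target runs; both commands below simply follow those hints (instance id 0 matches the -i 0 on the nvmf_tgt command line, and group mask 0xFFFF was enabled with -e):

    # snapshot events from the live target, as the notice suggests
    spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0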
00:29:08.616 [2024-10-01 16:54:00.060727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.185 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:09.185 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:09.185 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:09.185 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:09.185 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.185 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.185 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:09.185 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.185 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.185 [2024-10-01 16:54:00.774745] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:09.185 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.185 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:09.185 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:09.185 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.185 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.185 null0 00:29:09.185 [2024-10-01 16:54:00.854884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:09.444 [2024-10-01 16:54:00.879080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:09.444 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.444 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:09.444 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:09.444 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:09.444 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:09.444 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:09.444 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2851387 00:29:09.444 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2851387 /var/tmp/bperf.sock 00:29:09.444 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2851387 ']' 00:29:09.444 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
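Annotation: the bdevperf invocation above, with its flags spelled out. Flag meanings follow standard bdevperf usage and are worth confirming against --help on your build; the relative binary path here stands in for the absolute Jenkins workspace path in the log:

    #   -m 2                    core mask (reactor pinned to core 1)
    #   -r /var/tmp/bperf.sock  RPC socket the digest script drives
    #   -w randread             workload type
    #   -o 4096                 I/O size in bytes
    #   -t 2                    run time in seconds
    #   -q 128                  queue depth
    #   -z                      start idle and wait for RPCs instead of running immediately
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z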
00:29:09.444 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:09.444 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:09.444 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:09.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:09.444 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:09.444 16:54:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.444 [2024-10-01 16:54:00.933397] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:29:09.444 [2024-10-01 16:54:00.933442] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851387 ] 00:29:09.445 [2024-10-01 16:54:00.983565] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.445 [2024-10-01 16:54:01.038194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.445 16:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:09.445 16:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:09.445 16:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:09.445 16:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:09.704 16:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:09.704 16:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.704 16:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.704 16:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.704 16:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.704 16:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:10.276 nvme0n1 00:29:10.276 16:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:10.276 16:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.276 16:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
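Annotation: the error path mirrors the clean run but layers injection on the target side, where crc32c was assigned to the accel "error" module at startup. Two sockets are in play: bperf_rpc drives bdevperf on /var/tmp/bperf.sock, while rpc_cmd drives the nvmf target (shown here against /var/tmp/spdk.sock, an assumption — rpc_cmd uses the target app's default socket inside the test netns). Arguments are copied from the trace; note injection stays disabled until after the controller attaches, and is only then flipped to corrupt mode a few lines below:

    # bdevperf side: keep per-controller error stats; -1 retries forever at the bdev layer
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1
    # target side: injection off so the controller can attach cleanly
    ./scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side: corrupt crc32c results so every received data digest miscompares
    ./scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256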
00:29:10.276 16:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.276 16:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:10.276 16:54:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:10.276 Running I/O for 2 seconds... 00:29:10.276 [2024-10-01 16:54:01.817273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.276 [2024-10-01 16:54:01.817305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.276 [2024-10-01 16:54:01.817315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.276 [2024-10-01 16:54:01.827435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.276 [2024-10-01 16:54:01.827455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.276 [2024-10-01 16:54:01.827462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.276 [2024-10-01 16:54:01.840192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.276 [2024-10-01 16:54:01.840211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.276 [2024-10-01 16:54:01.840218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.276 [2024-10-01 16:54:01.850669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.276 [2024-10-01 16:54:01.850687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.276 [2024-10-01 16:54:01.850694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.276 [2024-10-01 16:54:01.862237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.276 [2024-10-01 16:54:01.862255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.276 [2024-10-01 16:54:01.862262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.276 [2024-10-01 16:54:01.875370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.276 [2024-10-01 16:54:01.875387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.276 [2024-10-01 16:54:01.875394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.276 [2024-10-01 16:54:01.884263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.276 [2024-10-01 16:54:01.884281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.276 [2024-10-01 16:54:01.884288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.276 [2024-10-01 16:54:01.897827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.276 [2024-10-01 16:54:01.897845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.276 [2024-10-01 16:54:01.897858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.276 [2024-10-01 16:54:01.910509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.276 [2024-10-01 16:54:01.910527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.276 [2024-10-01 16:54:01.910534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.276 [2024-10-01 16:54:01.921167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.276 [2024-10-01 16:54:01.921187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.276 [2024-10-01 16:54:01.921194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.276 [2024-10-01 16:54:01.932731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.276 [2024-10-01 16:54:01.932748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.276 [2024-10-01 16:54:01.932754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.276 [2024-10-01 16:54:01.945836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.276 [2024-10-01 16:54:01.945854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.276 [2024-10-01 16:54:01.945861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.276 [2024-10-01 16:54:01.954598] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.276 [2024-10-01 16:54:01.954620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.276 [2024-10-01 16:54:01.954627] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:01.966533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:01.966551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:01.966558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:01.978757] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:01.978775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:01.978782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:01.991758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:01.991776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:01.991783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.005031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.005049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:02.005056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.017077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.017095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:02.017101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.030021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.030038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:02.030045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.041974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.041992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 
16:54:02.041998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.054284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.054302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:02.054308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.066495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.066513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:02.066520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.077384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.077400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:02.077407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.087082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.087099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:02.087106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.099699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.099717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:02.099724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.111986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.112003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:02.112009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.123318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.123336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:253 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:02.123342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.135200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.135217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:02.135224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.144504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.144522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:02.144528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.156801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.156819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:02.156829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.169034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.169051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:02.169058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.181904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.181921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:02.181928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.193442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.193458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:02.193465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.204818] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.204835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:9723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:02.204842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.537 [2024-10-01 16:54:02.216916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.537 [2024-10-01 16:54:02.216933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.537 [2024-10-01 16:54:02.216940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.798 [2024-10-01 16:54:02.228949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.798 [2024-10-01 16:54:02.228966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.798 [2024-10-01 16:54:02.228978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.798 [2024-10-01 16:54:02.240922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.798 [2024-10-01 16:54:02.240939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.798 [2024-10-01 16:54:02.240946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.798 [2024-10-01 16:54:02.252640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.798 [2024-10-01 16:54:02.252657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.798 [2024-10-01 16:54:02.252663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.798 [2024-10-01 16:54:02.262567] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.798 [2024-10-01 16:54:02.262588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.798 [2024-10-01 16:54:02.262595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.798 [2024-10-01 16:54:02.274334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.798 [2024-10-01 16:54:02.274352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.798 [2024-10-01 16:54:02.274358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.798 [2024-10-01 16:54:02.286733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.798 [2024-10-01 16:54:02.286756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.798 [2024-10-01 16:54:02.286762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.798 [2024-10-01 16:54:02.298351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.798 [2024-10-01 16:54:02.298368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.798 [2024-10-01 16:54:02.298375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.798 [2024-10-01 16:54:02.310824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.798 [2024-10-01 16:54:02.310841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.798 [2024-10-01 16:54:02.310848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.798 [2024-10-01 16:54:02.321941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.799 [2024-10-01 16:54:02.321958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.799 [2024-10-01 16:54:02.321965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.799 [2024-10-01 16:54:02.332046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.799 [2024-10-01 16:54:02.332064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.799 [2024-10-01 16:54:02.332070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.799 [2024-10-01 16:54:02.344247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.799 [2024-10-01 16:54:02.344265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.799 [2024-10-01 16:54:02.344272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.799 [2024-10-01 16:54:02.355539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.799 [2024-10-01 16:54:02.355557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.799 [2024-10-01 16:54:02.355564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.799 [2024-10-01 16:54:02.366626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 
00:29:10.799 [2024-10-01 16:54:02.366643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.799 [2024-10-01 16:54:02.366650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.799 [2024-10-01 16:54:02.379189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.799 [2024-10-01 16:54:02.379208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.799 [2024-10-01 16:54:02.379216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.799 [2024-10-01 16:54:02.390711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.799 [2024-10-01 16:54:02.390728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.799 [2024-10-01 16:54:02.390734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.799 [2024-10-01 16:54:02.402692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.799 [2024-10-01 16:54:02.402709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.799 [2024-10-01 16:54:02.402716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.799 [2024-10-01 16:54:02.411648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.799 [2024-10-01 16:54:02.411666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.799 [2024-10-01 16:54:02.411673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.799 [2024-10-01 16:54:02.423689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.799 [2024-10-01 16:54:02.423705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.799 [2024-10-01 16:54:02.423712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.799 [2024-10-01 16:54:02.434902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.799 [2024-10-01 16:54:02.434919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.799 [2024-10-01 16:54:02.434926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.799 [2024-10-01 16:54:02.447924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.799 [2024-10-01 16:54:02.447941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.799 [2024-10-01 16:54:02.447947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.799 [2024-10-01 16:54:02.458493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.799 [2024-10-01 16:54:02.458511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.799 [2024-10-01 16:54:02.458521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.799 [2024-10-01 16:54:02.470955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:10.799 [2024-10-01 16:54:02.470979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.799 [2024-10-01 16:54:02.470986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.060 [2024-10-01 16:54:02.481238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.060 [2024-10-01 16:54:02.481255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.060 [2024-10-01 16:54:02.481261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.060 [2024-10-01 16:54:02.493685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.060 [2024-10-01 16:54:02.493703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.060 [2024-10-01 16:54:02.493710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.060 [2024-10-01 16:54:02.505054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.060 [2024-10-01 16:54:02.505071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.060 [2024-10-01 16:54:02.505078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.060 [2024-10-01 16:54:02.516657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.060 [2024-10-01 16:54:02.516676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.060 [2024-10-01 16:54:02.516689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.060 [2024-10-01 16:54:02.530735] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.060 [2024-10-01 16:54:02.530752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.060 [2024-10-01 16:54:02.530758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.060 [2024-10-01 16:54:02.542774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.060 [2024-10-01 16:54:02.542792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.060 [2024-10-01 16:54:02.542799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.060 [2024-10-01 16:54:02.554073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.060 [2024-10-01 16:54:02.554091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.060 [2024-10-01 16:54:02.554097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.060 [2024-10-01 16:54:02.565770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.060 [2024-10-01 16:54:02.565788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.060 [2024-10-01 16:54:02.565795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.060 [2024-10-01 16:54:02.577507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.060 [2024-10-01 16:54:02.577525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.060 [2024-10-01 16:54:02.577532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.060 [2024-10-01 16:54:02.588346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.060 [2024-10-01 16:54:02.588364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.060 [2024-10-01 16:54:02.588370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.060 [2024-10-01 16:54:02.599071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.060 [2024-10-01 16:54:02.599090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.060 [2024-10-01 16:54:02.599096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:11.060 [2024-10-01 16:54:02.610783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.060 [2024-10-01 16:54:02.610801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.060 [2024-10-01 16:54:02.610807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.060 [2024-10-01 16:54:02.623386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.060 [2024-10-01 16:54:02.623406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.060 [2024-10-01 16:54:02.623413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.060 [2024-10-01 16:54:02.635217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.060 [2024-10-01 16:54:02.635235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.060 [2024-10-01 16:54:02.635241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.060 [2024-10-01 16:54:02.645860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.060 [2024-10-01 16:54:02.645877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.060 [2024-10-01 16:54:02.645884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.061 [2024-10-01 16:54:02.657732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.061 [2024-10-01 16:54:02.657749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.061 [2024-10-01 16:54:02.657759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.061 [2024-10-01 16:54:02.670213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.061 [2024-10-01 16:54:02.670231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.061 [2024-10-01 16:54:02.670237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.061 [2024-10-01 16:54:02.683034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.061 [2024-10-01 16:54:02.683051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.061 [2024-10-01 16:54:02.683057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.061 [2024-10-01 16:54:02.694064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.061 [2024-10-01 16:54:02.694081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.061 [2024-10-01 16:54:02.694088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.061 [2024-10-01 16:54:02.704540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.061 [2024-10-01 16:54:02.704557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.061 [2024-10-01 16:54:02.704564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.061 [2024-10-01 16:54:02.715898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.061 [2024-10-01 16:54:02.715915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.061 [2024-10-01 16:54:02.715922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.061 [2024-10-01 16:54:02.728015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.061 [2024-10-01 16:54:02.728033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.061 [2024-10-01 16:54:02.728040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.061 [2024-10-01 16:54:02.739920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.061 [2024-10-01 16:54:02.739937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.061 [2024-10-01 16:54:02.739944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.753141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.753158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.753165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.766210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.766231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.766237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.777623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.777639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.777646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.788648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.788665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.788672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 21540.00 IOPS, 84.14 MiB/s [2024-10-01 16:54:02.797854] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.797872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.797879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.811628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.811645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.811652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.824212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.824230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.824236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.838022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.838040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.838046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.850308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.850326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:943 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.850332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.860305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.860323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.860329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.872231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.872249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.872255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.885633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.885651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.885658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.896263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.896280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.896287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.906754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.906771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.906777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.919021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.919040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.919047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.930345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.930364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 
nsid:1 lba:7821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.930370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.941247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.941265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.941272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.952743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.952760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.952768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.964141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.964159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.347 [2024-10-01 16:54:02.964169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.347 [2024-10-01 16:54:02.975456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.347 [2024-10-01 16:54:02.975473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.348 [2024-10-01 16:54:02.975480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.348 [2024-10-01 16:54:02.988200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.348 [2024-10-01 16:54:02.988218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.348 [2024-10-01 16:54:02.988224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.348 [2024-10-01 16:54:03.001155] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.348 [2024-10-01 16:54:03.001175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.348 [2024-10-01 16:54:03.001182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.348 [2024-10-01 16:54:03.014282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.348 [2024-10-01 16:54:03.014301] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.348 [2024-10-01 16:54:03.014308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.348 [2024-10-01 16:54:03.027012] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.348 [2024-10-01 16:54:03.027030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.348 [2024-10-01 16:54:03.027037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.041230] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.041248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.041255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.053753] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.053770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.053777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.066339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.066356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.066363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.076506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.076523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.076530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.090784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.090805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.090812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.102355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 
00:29:11.609 [2024-10-01 16:54:03.102372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.102379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.113987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.114004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.114010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.126092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.126109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.126116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.136877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.136895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.136902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.148586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.148603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.148610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.160628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.160645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.160652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.172878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.172896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.172906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.184871] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.184888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.184894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.193741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.193759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.193766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.207345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.207363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.207370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.218473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.218490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.218497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.229613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.229630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.229636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.241156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.241173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.241180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.253212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.253229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.253235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:11.609 [2024-10-01 16:54:03.265396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.265413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.265419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.276307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.276327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.276334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.609 [2024-10-01 16:54:03.288111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.609 [2024-10-01 16:54:03.288128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.609 [2024-10-01 16:54:03.288134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.299601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.299618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.299625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.310346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.310363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.310369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.322138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.322155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.322161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.333837] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.333854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.333861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.345118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.345135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.345141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.356983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.356999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.357006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.368896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.368914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.368920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.378945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.378962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.378972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.391982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.391999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.392006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.403379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.403399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.403406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.414582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.414600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.414608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.426915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.426932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.426939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.438844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.438861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.438868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.450260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.450277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.450284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.461077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.461093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.461100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.473238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.473256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.473266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.485333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.485351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.485358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.497571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.497589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:11.870 [2024-10-01 16:54:03.497596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.508167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.508185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.508199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.518537] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.518554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.518561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.530696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.530713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.530720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.870 [2024-10-01 16:54:03.543655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:11.870 [2024-10-01 16:54:03.543673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.870 [2024-10-01 16:54:03.543679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.131 [2024-10-01 16:54:03.554586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:12.131 [2024-10-01 16:54:03.554604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.131 [2024-10-01 16:54:03.554611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.131 [2024-10-01 16:54:03.566948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:12.131 [2024-10-01 16:54:03.566965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.131 [2024-10-01 16:54:03.566975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.131 [2024-10-01 16:54:03.577970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:12.131 [2024-10-01 16:54:03.577987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:1572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.131 [2024-10-01 16:54:03.577993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.131 [2024-10-01 16:54:03.589846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:12.131 [2024-10-01 16:54:03.589862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.131 [2024-10-01 16:54:03.589869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.131 [2024-10-01 16:54:03.601904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:12.131 [2024-10-01 16:54:03.601920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.131 [2024-10-01 16:54:03.601927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.131 [2024-10-01 16:54:03.612730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:12.131 [2024-10-01 16:54:03.612747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.131 [2024-10-01 16:54:03.612753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.131 [2024-10-01 16:54:03.623565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:12.131 [2024-10-01 16:54:03.623582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.131 [2024-10-01 16:54:03.623589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.131 [2024-10-01 16:54:03.635420] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:12.131 [2024-10-01 16:54:03.635437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.131 [2024-10-01 16:54:03.635443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.131 [2024-10-01 16:54:03.647177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:12.131 [2024-10-01 16:54:03.647193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.131 [2024-10-01 16:54:03.647200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.131 [2024-10-01 16:54:03.657398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:12.131 [2024-10-01 16:54:03.657415] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.131 [2024-10-01 16:54:03.657421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.131 [2024-10-01 16:54:03.669919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:12.131 [2024-10-01 16:54:03.669936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.131 [2024-10-01 16:54:03.669945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.131 [2024-10-01 16:54:03.681737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:12.131 [2024-10-01 16:54:03.681753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.132 [2024-10-01 16:54:03.681760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.132 [2024-10-01 16:54:03.691407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:12.132 [2024-10-01 16:54:03.691423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.132 [2024-10-01 16:54:03.691429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.132 [2024-10-01 16:54:03.703229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:12.132 [2024-10-01 16:54:03.703245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.132 [2024-10-01 16:54:03.703252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.132 [2024-10-01 16:54:03.717044] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:12.132 [2024-10-01 16:54:03.717061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.132 [2024-10-01 16:54:03.717067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.132 [2024-10-01 16:54:03.729517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 00:29:12.132 [2024-10-01 16:54:03.729534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.132 [2024-10-01 16:54:03.729540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.132 [2024-10-01 16:54:03.740585] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10) 
00:29:12.132 [2024-10-01 16:54:03.740602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.132 [2024-10-01 16:54:03.740608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:12.132 [2024-10-01 16:54:03.751366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10)
00:29:12.132 [2024-10-01 16:54:03.751383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.132 [2024-10-01 16:54:03.751389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:12.132 [2024-10-01 16:54:03.762332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10)
00:29:12.132 [2024-10-01 16:54:03.762350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.132 [2024-10-01 16:54:03.762357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:12.132 [2024-10-01 16:54:03.773934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10)
00:29:12.132 [2024-10-01 16:54:03.773955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.132 [2024-10-01 16:54:03.773961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:12.132 [2024-10-01 16:54:03.785425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10)
00:29:12.132 [2024-10-01 16:54:03.785442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.132 [2024-10-01 16:54:03.785448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:12.132 21646.00 IOPS, 84.55 MiB/s [2024-10-01 16:54:03.797888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3cc10)
00:29:12.132 [2024-10-01 16:54:03.797905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.132 [2024-10-01 16:54:03.797912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:12.393 
00:29:12.393                                                            Latency(us)
00:29:12.393 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:29:12.393 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:12.393 nvme0n1                                :       2.04   21237.11      82.96       0.00       0.00    5901.02    2230.74   48194.17
00:29:12.393 ===================================================================================================================
00:29:12.393 Total                                  :              21237.11      82.96       0.00       0.00    5901.02    2230.74   48194.17
00:29:12.393 {
00:29:12.393   "results": [
00:29:12.393     {
00:29:12.393       "job": "nvme0n1",
00:29:12.393       "core_mask": "0x2",
00:29:12.393       "workload": "randread",
00:29:12.393       "status": "finished",
00:29:12.393       "queue_depth": 128,
00:29:12.393       "io_size": 4096,
00:29:12.393       "runtime": 2.044534,
00:29:12.393       "iops": 21237.113200367417,
00:29:12.393       "mibps": 82.95747343893522,
00:29:12.393       "io_failed": 0,
00:29:12.393       "io_timeout": 0,
00:29:12.393       "avg_latency_us": 5901.019381922546,
00:29:12.393       "min_latency_us": 2230.7446153846154,
00:29:12.393       "max_latency_us": 48194.166153846156
00:29:12.393     }
00:29:12.393   ],
00:29:12.393   "core_count": 1
00:29:12.393 }
00:29:12.393 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:12.393 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:12.393 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:12.393 | .driver_specific
00:29:12.393 | .nvme_error
00:29:12.393 | .status_code
00:29:12.393 | .command_transient_transport_error'
00:29:12.393 16:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:12.393 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 ))
00:29:12.393 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2851387
00:29:12.393 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2851387 ']'
00:29:12.393 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2851387
00:29:12.393 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:12.393 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:12.393 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2851387
00:29:12.393 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:12.393 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:12.393 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2851387'
00:29:12.393 killing process with pid 2851387
00:29:12.393 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2851387
00:29:12.393 Received shutdown signal, test time was about 2.000000 seconds
00:29:12.393 
00:29:12.393                                                            Latency(us)
00:29:12.393 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:29:12.393 ===================================================================================================================
00:29:12.393 Total                                  :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:29:12.393 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2851387
00:29:12.654 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:12.654 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
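The trace above is the pass/fail check for this run: get_transient_errcount issues a bdev_get_iostat RPC against the bdevperf socket and jq pulls the transient-transport-error counter out of the returned JSON (170 here; note also that the 82.96 MiB/s in the summary is consistent with 21237.11 IOPS * 4096 B / 2^20). A minimal standalone sketch of the same query, assuming the socket path, bdev name, and SPDK checkout path seen in this run:

    #!/usr/bin/env bash
    # Sketch (not the harness itself): read the transient transport error count
    # for bdev nvme0n1 from a bdevperf instance listening on /var/tmp/bperf.sock.
    # Requires bdev_nvme_set_options --nvme-error-stat, as set later in the trace.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path from this run
    errcount=$("$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # The test asserts the counter is non-zero, mirroring (( 170 > 0 )) above.
    (( errcount > 0 )) && echo "detected $errcount transient transport errors"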
00:29:12.654 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2852057
16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2852057 /var/tmp/bperf.sock
16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2852057 ']'
16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:12.654 [2024-10-01 16:54:04.232127] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:29:12.654 [2024-10-01 16:54:04.232178] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852057 ]
00:29:12.654 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:12.654 Zero copy mechanism will not be used.
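Here the harness launches a second bdevperf instance idle (-z), pinned to core mask 0x2, for the 128 KiB randread pass at queue depth 16, and waitforlisten then blocks until the RPC socket answers (up to max_retries=100). A rough equivalent of that launch-and-wait step, with a simple poll loop standing in for the waitforlisten helper:

    # Sketch: start bdevperf idle and wait for its RPC socket (flags as logged above).
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Simplified stand-in for waitforlisten: poll until an RPC succeeds.
    for _ in $(seq 1 100); do
        "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done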
00:29:12.654 [2024-10-01 16:54:04.282784] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:12.915 [2024-10-01 16:54:04.337726] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:29:12.915 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:12.915 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:29:12.915 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:12.915 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:13.175 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:13.175 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:13.175 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:13.175 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:13.175 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:13.175 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:13.436 nvme0n1
00:29:13.436 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:13.436 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:13.436 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:13.436 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:13.436 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:13.436 16:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:13.436 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:13.436 Zero copy mechanism will not be used.
00:29:13.436 Running I/O for 2 seconds...
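This block is the error-injection setup for the run that follows: crc32c corruption is first disabled so the controller attach (with data digest enabled via --ddgst) completes cleanly, then accel_error_inject_error re-arms corruption for 32 crc32c operations before perform_tests starts the timed workload. Condensed into plain RPC calls (rpc_cmd and bperf_py in the trace are thin harness wrappers around these two tools; the rpc shell function here is just shorthand for this sketch):

    # Sketch of the injection sequence logged above, against the same socket.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc accel_error_inject_error -o crc32c -t disable        # attach must succeed first
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt the next 32 crc32c ops
    "$spdk"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests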
00:29:13.436 [2024-10-01 16:54:05.061236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660)
00:29:13.436 [2024-10-01 16:54:05.061268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.436 [2024-10-01 16:54:05.061277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... this three-line pattern (data digest error on tqpair=(0x1f37660), the failing READ on sqid:1 with varying cid and lba, len:32, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats continuously through 16:54:06.040 ...]
00:29:14.480 3296.00 IOPS, 412.00 MiB/s
[... the same data digest error pattern continues through 16:54:06.430 ...]
00:29:15.002 [2024-10-01 16:54:06.439928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660)
00:29:15.002 [2024-10-01 16:54:06.439947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.002 [2024-10-01 16:54:06.439954] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.002 [2024-10-01 16:54:06.449111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.002 [2024-10-01 16:54:06.449130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002 [2024-10-01 16:54:06.449137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.002 [2024-10-01 16:54:06.458536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.002 [2024-10-01 16:54:06.458555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002 [2024-10-01 16:54:06.458562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.002 [2024-10-01 16:54:06.470428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.002 [2024-10-01 16:54:06.470448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002 [2024-10-01 16:54:06.470454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.002 [2024-10-01 16:54:06.481710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.002 [2024-10-01 16:54:06.481729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.002 [2024-10-01 16:54:06.481736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.002 [2024-10-01 16:54:06.493215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.002 [2024-10-01 16:54:06.493233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-10-01 16:54:06.493240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.003 [2024-10-01 16:54:06.502027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.003 [2024-10-01 16:54:06.502046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-10-01 16:54:06.502052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.003 [2024-10-01 16:54:06.511849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.003 [2024-10-01 16:54:06.511875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 
[2024-10-01 16:54:06.511881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.003 [2024-10-01 16:54:06.522940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.003 [2024-10-01 16:54:06.522959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-10-01 16:54:06.522965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.003 [2024-10-01 16:54:06.534571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.003 [2024-10-01 16:54:06.534592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-10-01 16:54:06.534603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.003 [2024-10-01 16:54:06.546838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.003 [2024-10-01 16:54:06.546857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-10-01 16:54:06.546864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.003 [2024-10-01 16:54:06.558276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.003 [2024-10-01 16:54:06.558296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-10-01 16:54:06.558302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.003 [2024-10-01 16:54:06.569567] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.003 [2024-10-01 16:54:06.569586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-10-01 16:54:06.569593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.003 [2024-10-01 16:54:06.581299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.003 [2024-10-01 16:54:06.581319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-10-01 16:54:06.581326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.003 [2024-10-01 16:54:06.592500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.003 [2024-10-01 16:54:06.592519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-10-01 16:54:06.592526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.003 [2024-10-01 16:54:06.602000] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.003 [2024-10-01 16:54:06.602019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-10-01 16:54:06.602026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.003 [2024-10-01 16:54:06.610122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.003 [2024-10-01 16:54:06.610142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-10-01 16:54:06.610149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.003 [2024-10-01 16:54:06.620486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.003 [2024-10-01 16:54:06.620506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-10-01 16:54:06.620512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.003 [2024-10-01 16:54:06.630118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.003 [2024-10-01 16:54:06.630138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-10-01 16:54:06.630144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.003 [2024-10-01 16:54:06.637749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.003 [2024-10-01 16:54:06.637768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-10-01 16:54:06.637775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.003 [2024-10-01 16:54:06.648848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.003 [2024-10-01 16:54:06.648867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-10-01 16:54:06.648874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.003 [2024-10-01 16:54:06.660578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.003 [2024-10-01 16:54:06.660598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-10-01 16:54:06.660605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.003 [2024-10-01 16:54:06.671121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.003 [2024-10-01 16:54:06.671141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-10-01 16:54:06.671147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.003 [2024-10-01 16:54:06.680042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.003 [2024-10-01 16:54:06.680061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.003 [2024-10-01 16:54:06.680068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.692125] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.264 [2024-10-01 16:54:06.692145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.692155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.703461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.264 [2024-10-01 16:54:06.703480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.703486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.714728] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.264 [2024-10-01 16:54:06.714747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.714754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.726539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.264 [2024-10-01 16:54:06.726559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.726566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.737992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.264 [2024-10-01 16:54:06.738011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.738017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.748161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.264 [2024-10-01 16:54:06.748182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.748191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.758596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.264 [2024-10-01 16:54:06.758621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.758631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.770162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.264 [2024-10-01 16:54:06.770182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.770192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.779993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.264 [2024-10-01 16:54:06.780013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.780019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.789545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.264 [2024-10-01 16:54:06.789565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.789571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.799479] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.264 [2024-10-01 16:54:06.799498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.799505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.810046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.264 
[2024-10-01 16:54:06.810073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.810080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.820667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.264 [2024-10-01 16:54:06.820686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.820693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.831766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.264 [2024-10-01 16:54:06.831785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.831792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.843156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.264 [2024-10-01 16:54:06.843175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.843181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.853216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.264 [2024-10-01 16:54:06.853236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.853243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.861542] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.264 [2024-10-01 16:54:06.861562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.861568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.870168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.264 [2024-10-01 16:54:06.870189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.870206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.880555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1f37660) 00:29:15.264 [2024-10-01 16:54:06.880573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.264 [2024-10-01 16:54:06.880579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.264 [2024-10-01 16:54:06.889613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.265 [2024-10-01 16:54:06.889632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.265 [2024-10-01 16:54:06.889639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.265 [2024-10-01 16:54:06.900189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.265 [2024-10-01 16:54:06.900208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.265 [2024-10-01 16:54:06.900214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.265 [2024-10-01 16:54:06.911606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.265 [2024-10-01 16:54:06.911626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.265 [2024-10-01 16:54:06.911632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.265 [2024-10-01 16:54:06.922561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.265 [2024-10-01 16:54:06.922580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.265 [2024-10-01 16:54:06.922587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.265 [2024-10-01 16:54:06.932987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.265 [2024-10-01 16:54:06.933006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.265 [2024-10-01 16:54:06.933013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.265 [2024-10-01 16:54:06.942273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.265 [2024-10-01 16:54:06.942292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.265 [2024-10-01 16:54:06.942299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.526 [2024-10-01 16:54:06.952468] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.526 [2024-10-01 16:54:06.952488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-01 16:54:06.952494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.526 [2024-10-01 16:54:06.959825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.526 [2024-10-01 16:54:06.959847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-01 16:54:06.959854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.526 [2024-10-01 16:54:06.970070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.526 [2024-10-01 16:54:06.970089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-01 16:54:06.970096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.526 [2024-10-01 16:54:06.980171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.526 [2024-10-01 16:54:06.980192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-01 16:54:06.980204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.526 [2024-10-01 16:54:06.988192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.526 [2024-10-01 16:54:06.988211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-01 16:54:06.988218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.526 [2024-10-01 16:54:06.999328] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.526 [2024-10-01 16:54:06.999347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-01 16:54:06.999354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.526 [2024-10-01 16:54:07.008553] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f37660) 00:29:15.526 [2024-10-01 16:54:07.008572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-01 16:54:07.008579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
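Each of the failed completions above increments bdevperf's per-controller NVMe error statistics (enabled via bdev_nvme_set_options --nvme-error-stat, as traced later for the randwrite pass), and the get_transient_errcount step traced below reads that counter back after the job summary. A minimal sketch of that readout, built only from the rpc.py invocation and jq filter that appear verbatim in this trace:

# Sketch: read the transient transport error counter from the bdevperf
# instance listening on /var/tmp/bperf.sock (paths as in this workspace).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
get_transient_errcount() {
    local bdev=$1
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}
# The test asserts the count is non-zero; this run read back 204:
(( $(get_transient_errcount nvme0n1) > 0 ))

Because the controller was attached with --bdev-retry-count -1, every digest failure is retried rather than surfaced, which is why the job below still finishes with "io_failed": 0 while 204 transient transport errors were recorded.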
00:29:15.526 3168.00 IOPS, 396.00 MiB/s
00:29:15.526 Latency(us)
00:29:15.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:15.526 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:15.526 nvme0n1 : 2.00 3169.41 396.18 0.00 0.00 5045.69 1127.98 13308.85
00:29:15.526 ===================================================================================================================
00:29:15.526 Total : 3169.41 396.18 0.00 0.00 5045.69 1127.98 13308.85
00:29:15.526 {
00:29:15.526 "results": [
00:29:15.526 {
00:29:15.526 "job": "nvme0n1",
00:29:15.526 "core_mask": "0x2",
00:29:15.526 "workload": "randread",
00:29:15.526 "status": "finished",
00:29:15.526 "queue_depth": 16,
00:29:15.526 "io_size": 131072,
00:29:15.526 "runtime": 2.004791,
00:29:15.526 "iops": 3169.4076838932338,
00:29:15.526 "mibps": 396.1759604866542,
00:29:15.526 "io_failed": 0,
00:29:15.526 "io_timeout": 0,
00:29:15.526 "avg_latency_us": 5045.692127551391,
00:29:15.526 "min_latency_us": 1127.9753846153847,
00:29:15.526 "max_latency_us": 13308.84923076923
00:29:15.526 }
00:29:15.526 ],
00:29:15.526 "core_count": 1
00:29:15.526 }
00:29:15.526 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:15.526 | .driver_specific
00:29:15.526 | .nvme_error
00:29:15.526 | .status_code
00:29:15.526 | .command_transient_transport_error'
00:29:15.526 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 204 > 0 ))
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2852057
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2852057 ']'
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2852057
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2852057
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2852057'
00:29:15.787 killing process with pid 2852057
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2852057
00:29:15.787 Received shutdown signal, test time was about 2.000000 seconds
00:29:15.787
00:29:15.787 Latency(us)
00:29:15.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:15.787 ===================================================================================================================
00:29:15.787 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2852057
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2852668
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2852668 /var/tmp/bperf.sock
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2852668 ']'
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:15.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:15.787 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:16.047 [2024-10-01 16:54:07.491295] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:29:16.047 [2024-10-01 16:54:07.491345] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852668 ]
00:29:16.047 [2024-10-01 16:54:07.541798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:16.047 [2024-10-01 16:54:07.596216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:29:16.047 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:16.047 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:29:16.047 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:16.047 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:16.307 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:16.307 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:16.307 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:16.307 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:16.307 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:16.307 16:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:16.566 nvme0n1
00:29:16.566 16:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:16.566 16:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:16.566 16:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
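That completes the randwrite setup: bdevperf is restarted idle (-z), error statistics and unlimited retries are switched on, the controller is reattached with data digest enabled, and crc32c corruption is armed before the job runs. Condensed into plain shell, as a sketch assembled from the commands in the trace above (rpc_tgt is a stand-in for the harness's rpc_cmd helper, which in this run appears to drive the nvmf target application's default RPC socket):

# Sketch of run_bperf_err's randwrite setup, paths as in this workspace.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_bperf() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
rpc_tgt() { "$SPDK/scripts/rpc.py" "$@"; }  # default socket; stands in for rpc_cmd

# Start bdevperf idle (-z) so it can be configured over /var/tmp/bperf.sock.
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 4096 -t 2 -q 128 -z &

# Count NVMe errors by status code and retry failed I/O forever, so injected
# digest errors show up as statistics instead of failing the job.
rpc_bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_tgt accel_error_inject_error -o crc32c -t disable   # clear any old injection

# Attach the target with TCP data digest (--ddgst) enabled.
rpc_bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm crc32c error injection (-o crc32c -t corrupt -i 256, as traced),
# then kick off the 2-second job over the same socket.
rpc_tgt accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Note that the digest errors which follow are reported from tcp.c (the target's TCP transport) rather than nvme_tcp.c: for writes it is the controller side that verifies the digest on received H2C data, so the corrupted CRCs are detected there and surface to the host as the same transient transport errors.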
00:29:16.566 16:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:16.566 16:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:16.566 16:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:16.826 Running I/O for 2 seconds...
00:29:16.826 [2024-10-01 16:54:08.364548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198f3a28
00:29:16.826 [2024-10-01 16:54:08.365452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:16.826 [2024-10-01 16:54:08.365479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0
[... the same pattern (tcp.c:2233 Data digest error on tqpair=(0xbf0470), WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats at varying cids, LBAs, and pdu values, settling on pdu=0x2000198ddc00 with sqhd:007a from 16:54:08.663 onward, through 16:54:08.799 ...]
00:29:17.349 [2024-10-01 16:54:08.810223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00
00:29:17.349 [2024-10-01 16:54:08.810454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:17.349 [2024-10-01 16:54:08.810469] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:08.821538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:08.821778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 16:54:08.821796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:08.832848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:08.833098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 16:54:08.833115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:08.844164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:08.844422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 16:54:08.844439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:08.855480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:08.855705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 16:54:08.855722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:08.866782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:08.867032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 16:54:08.867048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:08.878101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:08.878346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 16:54:08.878362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:08.889409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:08.889631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 16:54:08.889647] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:08.900684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:08.900934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 16:54:08.900950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:08.912028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:08.912150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 16:54:08.912166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:08.923453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:08.923674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 16:54:08.923693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:08.934750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:08.934992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 16:54:08.935008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:08.946040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:08.946293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 16:54:08.946309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:08.957362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:08.957608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 16:54:08.957626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:08.968675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:08.968916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 
16:54:08.968933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:08.980003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:08.980239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 16:54:08.980254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:08.991317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:08.991582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 16:54:08.991602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:09.002620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:09.002854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 16:54:09.002869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:09.013920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:09.014176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 16:54:09.014200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.349 [2024-10-01 16:54:09.025230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.349 [2024-10-01 16:54:09.025474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.349 [2024-10-01 16:54:09.025490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.036527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.036755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.036771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.047832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.048080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:17.610 [2024-10-01 16:54:09.048097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.059137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.059384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.059402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.070419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.070656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.070673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.081721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.081962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.081985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.093039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.093279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.093296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.104566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.104826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.104843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.115876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.116006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.116022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.127206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.127460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7289 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.127476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.138490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.138708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.138723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.149821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.150074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.150090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.161136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.161359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.161375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.172441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.172686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.172701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.183749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.183974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.183989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.195069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.195285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.195300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.206366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.206605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5885 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.206620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.217679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.217910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.217928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.229002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.229239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.229255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.240290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.240511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.240526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.251586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.251829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.251844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.262883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.263112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.263128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.274184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.274430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.610 [2024-10-01 16:54:09.274446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.610 [2024-10-01 16:54:09.285493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.610 [2024-10-01 16:54:09.285613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:12538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.611 [2024-10-01 16:54:09.285629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.871 [2024-10-01 16:54:09.296796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.871 [2024-10-01 16:54:09.296913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.871 [2024-10-01 16:54:09.296929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.871 [2024-10-01 16:54:09.308104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.871 [2024-10-01 16:54:09.308327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.871 [2024-10-01 16:54:09.308342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.871 [2024-10-01 16:54:09.319394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.871 [2024-10-01 16:54:09.319635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.871 [2024-10-01 16:54:09.319650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.871 [2024-10-01 16:54:09.330715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.871 [2024-10-01 16:54:09.330950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.871 [2024-10-01 16:54:09.330965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.871 [2024-10-01 16:54:09.342021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.871 [2024-10-01 16:54:09.342255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.871 [2024-10-01 16:54:09.342270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.871 [2024-10-01 16:54:09.353325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.871 22656.00 IOPS, 88.50 MiB/s [2024-10-01 16:54:09.353844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.871 [2024-10-01 16:54:09.353858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.871 [2024-10-01 16:54:09.364626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.871 [2024-10-01 16:54:09.364857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.871 [2024-10-01 16:54:09.364872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.871 [2024-10-01 16:54:09.375920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.871 [2024-10-01 16:54:09.376148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.871 [2024-10-01 16:54:09.376164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.871 [2024-10-01 16:54:09.387202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.871 [2024-10-01 16:54:09.387451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.871 [2024-10-01 16:54:09.387474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.872 [2024-10-01 16:54:09.398537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.872 [2024-10-01 16:54:09.398793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.872 [2024-10-01 16:54:09.398808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.872 [2024-10-01 16:54:09.409815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.872 [2024-10-01 16:54:09.410038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.872 [2024-10-01 16:54:09.410053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.872 [2024-10-01 16:54:09.421149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.872 [2024-10-01 16:54:09.421392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.872 [2024-10-01 16:54:09.421408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.872 [2024-10-01 16:54:09.432457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.872 [2024-10-01 16:54:09.432682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.872 [2024-10-01 16:54:09.432697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.872 [2024-10-01 16:54:09.443748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.872 [2024-10-01 16:54:09.443987] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.872 [2024-10-01 16:54:09.444003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.872 [2024-10-01 16:54:09.455053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.872 [2024-10-01 16:54:09.455299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.872 [2024-10-01 16:54:09.455316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.872 [2024-10-01 16:54:09.466367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.872 [2024-10-01 16:54:09.466605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.872 [2024-10-01 16:54:09.466621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.872 [2024-10-01 16:54:09.477663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.872 [2024-10-01 16:54:09.477906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.872 [2024-10-01 16:54:09.477922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.872 [2024-10-01 16:54:09.489001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.872 [2024-10-01 16:54:09.489255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.872 [2024-10-01 16:54:09.489271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.872 [2024-10-01 16:54:09.500305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.872 [2024-10-01 16:54:09.500550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.872 [2024-10-01 16:54:09.500565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.872 [2024-10-01 16:54:09.511610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.872 [2024-10-01 16:54:09.511861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.872 [2024-10-01 16:54:09.511880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.872 [2024-10-01 16:54:09.522933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.872 [2024-10-01 
16:54:09.523158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.872 [2024-10-01 16:54:09.523173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.872 [2024-10-01 16:54:09.534256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.872 [2024-10-01 16:54:09.534504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.872 [2024-10-01 16:54:09.534520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:17.872 [2024-10-01 16:54:09.545542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:17.872 [2024-10-01 16:54:09.545790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.872 [2024-10-01 16:54:09.545806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.134 [2024-10-01 16:54:09.556842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.134 [2024-10-01 16:54:09.557071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.134 [2024-10-01 16:54:09.557086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.134 [2024-10-01 16:54:09.568149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.134 [2024-10-01 16:54:09.568393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.134 [2024-10-01 16:54:09.568408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.134 [2024-10-01 16:54:09.579450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.134 [2024-10-01 16:54:09.579682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.134 [2024-10-01 16:54:09.579697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.134 [2024-10-01 16:54:09.590764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.134 [2024-10-01 16:54:09.591007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.134 [2024-10-01 16:54:09.591022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.134 [2024-10-01 16:54:09.602070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.134 
[2024-10-01 16:54:09.602322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.134 [2024-10-01 16:54:09.602338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.134 [2024-10-01 16:54:09.613355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.134 [2024-10-01 16:54:09.613591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.134 [2024-10-01 16:54:09.613608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.134 [2024-10-01 16:54:09.624671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.134 [2024-10-01 16:54:09.624923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.134 [2024-10-01 16:54:09.624940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.134 [2024-10-01 16:54:09.635975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.134 [2024-10-01 16:54:09.636220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.134 [2024-10-01 16:54:09.636236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.134 [2024-10-01 16:54:09.647274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.134 [2024-10-01 16:54:09.647516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.134 [2024-10-01 16:54:09.647532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.134 [2024-10-01 16:54:09.658578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.134 [2024-10-01 16:54:09.658805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.134 [2024-10-01 16:54:09.658824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.134 [2024-10-01 16:54:09.669876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.134 [2024-10-01 16:54:09.670129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.134 [2024-10-01 16:54:09.670145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.134 [2024-10-01 16:54:09.681202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 
00:29:18.134 [2024-10-01 16:54:09.681442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.134 [2024-10-01 16:54:09.681457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.134 [2024-10-01 16:54:09.692513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.134 [2024-10-01 16:54:09.692758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.134 [2024-10-01 16:54:09.692774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.134 [2024-10-01 16:54:09.703816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.134 [2024-10-01 16:54:09.704073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.134 [2024-10-01 16:54:09.704089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.135 [2024-10-01 16:54:09.715117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.135 [2024-10-01 16:54:09.715345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.135 [2024-10-01 16:54:09.715360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.135 [2024-10-01 16:54:09.726424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.135 [2024-10-01 16:54:09.726664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.135 [2024-10-01 16:54:09.726679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.135 [2024-10-01 16:54:09.737722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.135 [2024-10-01 16:54:09.737966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.135 [2024-10-01 16:54:09.737986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.135 [2024-10-01 16:54:09.749096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.135 [2024-10-01 16:54:09.749331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.135 [2024-10-01 16:54:09.749347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.135 [2024-10-01 16:54:09.760410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) 
with pdu=0x2000198ddc00 00:29:18.135 [2024-10-01 16:54:09.760632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.135 [2024-10-01 16:54:09.760648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.135 [2024-10-01 16:54:09.771724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.135 [2024-10-01 16:54:09.771962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.135 [2024-10-01 16:54:09.771983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.135 [2024-10-01 16:54:09.783036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.135 [2024-10-01 16:54:09.783283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.135 [2024-10-01 16:54:09.783300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.135 [2024-10-01 16:54:09.794331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.135 [2024-10-01 16:54:09.794555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.135 [2024-10-01 16:54:09.794571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.135 [2024-10-01 16:54:09.805607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.135 [2024-10-01 16:54:09.805862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.135 [2024-10-01 16:54:09.805881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.396 [2024-10-01 16:54:09.816922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.396 [2024-10-01 16:54:09.817157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.396 [2024-10-01 16:54:09.817173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.396 [2024-10-01 16:54:09.828229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00 00:29:18.396 [2024-10-01 16:54:09.828485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.396 [2024-10-01 16:54:09.828500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.396 [2024-10-01 16:54:09.839555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xbf0470) with pdu=0x2000198ddc00
00:29:18.396 [2024-10-01 16:54:09.839783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:18.396 [2024-10-01 16:54:09.839799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:18.396 [2024-10-01 16:54:09.850824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00
00:29:18.396 [2024-10-01 16:54:09.851078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:18.396 [2024-10-01 16:54:09.851094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0
[... the same three-record cycle -- data_crc32_calc_done digest error on tqpair (0xbf0470), WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats roughly every 11 ms from 16:54:09.862150 through 16:54:10.349757, differing only in cid and lba ...]
00:29:18.918 22619.50 IOPS, 88.36 MiB/s [2024-10-01 16:54:10.360762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0470) with pdu=0x2000198ddc00
00:29:18.918 [2024-10-01 16:54:10.360965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:18.918 [2024-10-01 16:54:10.360982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:18.919
00:29:18.919 Latency(us)
00:29:18.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:18.919 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:18.919 nvme0n1 : 2.01 22616.44 88.35 0.00 0.00 5648.13 2192.94 12905.55
00:29:18.919 ===================================================================================================================
00:29:18.919 Total : 22616.44 88.35 0.00 0.00 5648.13 2192.94 12905.55
00:29:18.919 {
00:29:18.919 "results": [
00:29:18.919 {
00:29:18.919 "job": "nvme0n1",
00:29:18.919 "core_mask": "0x2",
00:29:18.919 "workload": "randwrite",
00:29:18.919 "status": "finished",
00:29:18.919 "queue_depth": 128,
00:29:18.919 "io_size": 4096,
00:29:18.919 "runtime": 2.006991,
00:29:18.919 "iops": 22616.444219231675,
00:29:18.919 "mibps": 88.34548523137373,
00:29:18.919 "io_failed": 0,
00:29:18.919 "io_timeout": 0,
00:29:18.919 "avg_latency_us": 5648.1345470382985,
00:29:18.919 "min_latency_us": 2192.9353846153845,
00:29:18.919 "max_latency_us": 12905.55076923077
00:29:18.919 }
00:29:18.919 ],
00:29:18.919 "core_count": 1
00:29:18.919 }
00:29:18.919 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:18.919 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:18.919 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:18.919 | .driver_specific
00:29:18.919 | .nvme_error
00:29:18.919 | .status_code
00:29:18.919 | .command_transient_transport_error'
00:29:18.919 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:19.178 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 178 > 0 ))
00:29:19.178 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2852668
00:29:19.178 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2852668 ']'
00:29:19.178 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2852668
00:29:19.178 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:19.178 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:19.178 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2852668
00:29:19.178 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:19.178 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:19.178 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2852668'
00:29:19.178 killing process with pid 2852668
00:29:19.178 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2852668
00:29:19.178 Received shutdown signal, test time was about 2.000000 seconds
00:29:19.178
00:29:19.178 Latency(us)
00:29:19.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:19.178 ===================================================================================================================
00:29:19.178 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:19.179 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2852668
00:29:19.179 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:19.179 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:19.179 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:19.179 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:19.179 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:19.179 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2853593
00:29:19.179 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2853593 /var/tmp/bperf.sock
00:29:19.179 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2853593 ']'
00:29:19.179 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:19.179 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:19.179 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:19.179 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:19.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:19.179 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:19.179 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:19.179 [2024-10-01 16:54:10.807368] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
00:29:19.179 [2024-10-01 16:54:10.807422] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853593 ]
00:29:19.179 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:19.179 Zero copy mechanism will not be used.
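The host/digest.sh@71 check traced above gates the test on the per-controller error counter that bdev_nvme exposes once --nvme-error-stat is set: the run passes only if the injected digest corruptions surfaced as transient transport errors (178 of them here). A minimal standalone sketch of that gate, assuming the same SPDK checkout path and /var/tmp/bperf.sock RPC socket as this job:

#!/usr/bin/env bash
# Sketch of host/digest.sh's get_transient_errcount gate, as traced above.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: checkout path from this job
SOCK=/var/tmp/bperf.sock

# bdev_get_iostat only carries the nvme_error counters when the controller
# was configured with: bdev_nvme_set_options --nvme-error-stat
errcount=$("$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error')

# Anything > 0 means the injected CRC32C corruption was detected on the wire
# and completed as COMMAND TRANSIENT TRANSPORT ERROR, as in the log above.
(( errcount > 0 ))
echo "transient transport errors: $errcount"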
00:29:19.179 [2024-10-01 16:54:10.857511] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:19.438 [2024-10-01 16:54:10.911899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:29:19.438 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:19.438 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:29:19.438 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:19.438 16:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:19.698 16:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:19.698 16:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:19.698 16:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:19.698 16:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:19.698 16:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:19.698 16:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:19.958 nvme0n1
00:29:19.958 16:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:19.958 16:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:19.958 16:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:19.958 16:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:19.958 16:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:19.958 16:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:20.219 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:20.219 Zero copy mechanism will not be used.
00:29:20.219 Running I/O for 2 seconds...
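For readers following the trace, the setup before this second run is: enable per-NVMe error statistics with unlimited retries (so transient errors never fail the I/O itself), reset the accel crc32c error injector, attach the target with data digest (--ddgst) enabled, then corrupt every 32nd crc32c operation and kick off the workload. Replayed as a standalone sketch, under the assumption that the harness's rpc_cmd talks to the target app's default RPC socket while bperf_rpc talks to /var/tmp/bperf.sock:

#!/usr/bin/env bash
# Sketch of the run_bperf_err setup traced above (randwrite, 128 KiB, qd 16).
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: checkout path from this job
bperf_rpc()  { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
target_rpc() { "$SPDK/scripts/rpc.py" "$@"; }            # assumption: default socket of the target app

# Count NVMe errors per status code; retry forever so the injected errors
# show up in the stats instead of failing bdevperf's I/O.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start with the crc32c injector disabled, attach with data digest enabled,
# then corrupt every 32nd crc32c so data digests go bad on the wire.
target_rpc accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
target_rpc accel_error_inject_error -o crc32c -t corrupt -i 32

# bdevperf was started with -z (wait), so the 2-second run begins only here.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests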
00:29:20.219 [2024-10-01 16:54:11.720739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90
00:29:20.219 [2024-10-01 16:54:11.721077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.219 [2024-10-01 16:54:11.721105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:20.219 [2024-10-01 16:54:11.729933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90
00:29:20.219 [2024-10-01 16:54:11.730262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.219 [2024-10-01 16:54:11.730287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-record cycle -- data_crc32_calc_done digest error on tqpair (0xbf0950), WRITE command print (always cid:15, len:32), COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats from 16:54:11.738858 through 16:54:12.215469, differing only in lba and sqhd ...]
00:29:20.743 [2024-10-01 16:54:12.222247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90
00:29:20.743 [2024-10-01 16:54:12.222570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.743 [2024-10-01 16:54:12.222587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:20.743 [2024-10-01 16:54:12.230794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data
digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.743 [2024-10-01 16:54:12.230981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.743 [2024-10-01 16:54:12.230997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.743 [2024-10-01 16:54:12.239345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.743 [2024-10-01 16:54:12.239634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.743 [2024-10-01 16:54:12.239651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.743 [2024-10-01 16:54:12.244700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.743 [2024-10-01 16:54:12.244873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.743 [2024-10-01 16:54:12.244889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.743 [2024-10-01 16:54:12.250629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.743 [2024-10-01 16:54:12.250803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.743 [2024-10-01 16:54:12.250820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.743 [2024-10-01 16:54:12.256762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.743 [2024-10-01 16:54:12.256939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.743 [2024-10-01 16:54:12.256955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.743 [2024-10-01 16:54:12.264881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.743 [2024-10-01 16:54:12.265071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.743 [2024-10-01 16:54:12.265088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.743 [2024-10-01 16:54:12.272630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.743 [2024-10-01 16:54:12.272807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.743 [2024-10-01 16:54:12.272824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.743 [2024-10-01 16:54:12.280766] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.743 [2024-10-01 16:54:12.280941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.743 [2024-10-01 16:54:12.280957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.743 [2024-10-01 16:54:12.285127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.743 [2024-10-01 16:54:12.285301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.743 [2024-10-01 16:54:12.285318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.743 [2024-10-01 16:54:12.290382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.743 [2024-10-01 16:54:12.290556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.743 [2024-10-01 16:54:12.290571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.743 [2024-10-01 16:54:12.298243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.743 [2024-10-01 16:54:12.298459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.743 [2024-10-01 16:54:12.298479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.743 [2024-10-01 16:54:12.304414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.743 [2024-10-01 16:54:12.304592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.743 [2024-10-01 16:54:12.304608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.743 [2024-10-01 16:54:12.310456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.743 [2024-10-01 16:54:12.310676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.743 [2024-10-01 16:54:12.310693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.743 [2024-10-01 16:54:12.319472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.743 [2024-10-01 16:54:12.319799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.743 [2024-10-01 16:54:12.319816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:29:20.743 [2024-10-01 16:54:12.325365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.743 [2024-10-01 16:54:12.325540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.743 [2024-10-01 16:54:12.325556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.743 [2024-10-01 16:54:12.333286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.743 [2024-10-01 16:54:12.333460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.744 [2024-10-01 16:54:12.333476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.744 [2024-10-01 16:54:12.338885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.744 [2024-10-01 16:54:12.339066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.744 [2024-10-01 16:54:12.339082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.744 [2024-10-01 16:54:12.343342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.744 [2024-10-01 16:54:12.343517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.744 [2024-10-01 16:54:12.343534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.744 [2024-10-01 16:54:12.347172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.744 [2024-10-01 16:54:12.347349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.744 [2024-10-01 16:54:12.347365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.744 [2024-10-01 16:54:12.351229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.744 [2024-10-01 16:54:12.351409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.744 [2024-10-01 16:54:12.351425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.744 [2024-10-01 16:54:12.355227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.744 [2024-10-01 16:54:12.355401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.744 [2024-10-01 16:54:12.355418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.744 [2024-10-01 16:54:12.359238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.744 [2024-10-01 16:54:12.359416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.744 [2024-10-01 16:54:12.359432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.744 [2024-10-01 16:54:12.362895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.744 [2024-10-01 16:54:12.363076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.744 [2024-10-01 16:54:12.363092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.744 [2024-10-01 16:54:12.366823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.744 [2024-10-01 16:54:12.367003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.744 [2024-10-01 16:54:12.367020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.744 [2024-10-01 16:54:12.370798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.744 [2024-10-01 16:54:12.370977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.744 [2024-10-01 16:54:12.370993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.744 [2024-10-01 16:54:12.376212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.744 [2024-10-01 16:54:12.376388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.744 [2024-10-01 16:54:12.376404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.744 [2024-10-01 16:54:12.381106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.744 [2024-10-01 16:54:12.381283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.744 [2024-10-01 16:54:12.381300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.744 [2024-10-01 16:54:12.385239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.744 [2024-10-01 16:54:12.385415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.744 [2024-10-01 16:54:12.385431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.744 [2024-10-01 16:54:12.388955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.744 [2024-10-01 16:54:12.389140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.744 [2024-10-01 16:54:12.389156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.744 [2024-10-01 16:54:12.392839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.744 [2024-10-01 16:54:12.393021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.744 [2024-10-01 16:54:12.393038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.744 [2024-10-01 16:54:12.398882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.744 [2024-10-01 16:54:12.399063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.744 [2024-10-01 16:54:12.399079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.744 [2024-10-01 16:54:12.407842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.744 [2024-10-01 16:54:12.408181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.744 [2024-10-01 16:54:12.408199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.744 [2024-10-01 16:54:12.417420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:20.744 [2024-10-01 16:54:12.417630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.744 [2024-10-01 16:54:12.417646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.005 [2024-10-01 16:54:12.426522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.005 [2024-10-01 16:54:12.426933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.005 [2024-10-01 16:54:12.426950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.005 [2024-10-01 16:54:12.434131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.005 [2024-10-01 16:54:12.434306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.005 [2024-10-01 16:54:12.434323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.005 [2024-10-01 16:54:12.443370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.005 [2024-10-01 16:54:12.443622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.005 [2024-10-01 16:54:12.443638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.005 [2024-10-01 16:54:12.451874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.005 [2024-10-01 16:54:12.452063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.005 [2024-10-01 16:54:12.452083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.005 [2024-10-01 16:54:12.455918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.005 [2024-10-01 16:54:12.456094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.005 [2024-10-01 16:54:12.456111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.005 [2024-10-01 16:54:12.460413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.005 [2024-10-01 16:54:12.460596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.460612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.465254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.465419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.465435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.473142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.473309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.473326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.477697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.477924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 
[2024-10-01 16:54:12.477941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.484276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.484444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.484461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.488289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.488454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.488470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.492300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.492468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.492485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.496436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.496605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.496621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.503826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.503984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.504001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.507295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.507450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.507467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.510938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.511098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.511114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.515104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.515265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.515280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.520608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.520773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.520789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.526867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.527032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.527048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.531121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.531279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.531295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.536435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.536691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.536711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.542260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.542416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.542432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.546040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.546194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.546211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.549529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.549689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.549705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.553110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.553270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.553286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.556428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.556586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.556603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.559994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.560152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.560169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.563160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.563323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.563339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.566377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.566537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.566553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.569559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.569721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.569737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.572733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.572891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.572907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.577804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.577967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.006 [2024-10-01 16:54:12.577988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.006 [2024-10-01 16:54:12.582319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.006 [2024-10-01 16:54:12.582557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.007 [2024-10-01 16:54:12.582573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.007 [2024-10-01 16:54:12.586518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.007 [2024-10-01 16:54:12.586678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.007 [2024-10-01 16:54:12.586693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.007 [2024-10-01 16:54:12.589640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.007 [2024-10-01 16:54:12.589801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.007 [2024-10-01 16:54:12.589817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.007 [2024-10-01 16:54:12.592739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.007 [2024-10-01 16:54:12.592898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.007 [2024-10-01 16:54:12.592913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.007 [2024-10-01 16:54:12.595864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.007 
[2024-10-01 16:54:12.596025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.007 [2024-10-01 16:54:12.596042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.007 [2024-10-01 16:54:12.599399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.007 [2024-10-01 16:54:12.599590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.007 [2024-10-01 16:54:12.599606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.007 [2024-10-01 16:54:12.606435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.007 [2024-10-01 16:54:12.606592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.007 [2024-10-01 16:54:12.606609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.007 [2024-10-01 16:54:12.614599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.007 [2024-10-01 16:54:12.614904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.007 [2024-10-01 16:54:12.614920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.007 [2024-10-01 16:54:12.625142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.007 [2024-10-01 16:54:12.625371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.007 [2024-10-01 16:54:12.625386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.007 [2024-10-01 16:54:12.635520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.007 [2024-10-01 16:54:12.635757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.007 [2024-10-01 16:54:12.635772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.007 [2024-10-01 16:54:12.645890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.007 [2024-10-01 16:54:12.646127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.007 [2024-10-01 16:54:12.646142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.007 [2024-10-01 16:54:12.655752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with 
pdu=0x2000198fef90 00:29:21.007 [2024-10-01 16:54:12.655976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.007 [2024-10-01 16:54:12.655991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.007 [2024-10-01 16:54:12.665263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.007 [2024-10-01 16:54:12.665490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.007 [2024-10-01 16:54:12.665508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.007 [2024-10-01 16:54:12.675679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.007 [2024-10-01 16:54:12.675895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.007 [2024-10-01 16:54:12.675910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.007 [2024-10-01 16:54:12.685694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.007 [2024-10-01 16:54:12.685907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.007 [2024-10-01 16:54:12.685925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.269 [2024-10-01 16:54:12.695152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.269 [2024-10-01 16:54:12.695362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.269 [2024-10-01 16:54:12.695377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.269 [2024-10-01 16:54:12.705518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.269 [2024-10-01 16:54:12.705704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.269 [2024-10-01 16:54:12.705719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.269 4683.00 IOPS, 585.38 MiB/s [2024-10-01 16:54:12.715052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.269 [2024-10-01 16:54:12.715169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.269 [2024-10-01 16:54:12.715185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.269 [2024-10-01 16:54:12.721044] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.269 [2024-10-01 16:54:12.721102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.269 [2024-10-01 16:54:12.721117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.269 [2024-10-01 16:54:12.724738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.269 [2024-10-01 16:54:12.724786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.269 [2024-10-01 16:54:12.724801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.269 [2024-10-01 16:54:12.728772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.269 [2024-10-01 16:54:12.728821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.269 [2024-10-01 16:54:12.728836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.269 [2024-10-01 16:54:12.732796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.269 [2024-10-01 16:54:12.732843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.269 [2024-10-01 16:54:12.732859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.269 [2024-10-01 16:54:12.736553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.269 [2024-10-01 16:54:12.736608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.269 [2024-10-01 16:54:12.736623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.269 [2024-10-01 16:54:12.740294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.269 [2024-10-01 16:54:12.740347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.269 [2024-10-01 16:54:12.740362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.269 [2024-10-01 16:54:12.743396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.269 [2024-10-01 16:54:12.743483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.269 [2024-10-01 16:54:12.743498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
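The errors above come from the NVMe/TCP data digest check: when a queue pair negotiates data digest, every data-bearing PDU carries a DDGST trailer holding the CRC32C of its payload, and the receiving side in tcp.c recomputes and compares that digest before accepting the data. On a mismatch the command is failed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is retryable (dnr:0), exactly as printed by spdk_nvme_print_completion. The standalone sketch below illustrates that check; it is not SPDK's implementation (SPDK uses equivalent table-driven or hardware-accelerated CRC32C helpers), and the payload contents, buffer size, and injected corruption are assumptions made for illustration only.

    /* crc32c_digest.c - standalone illustration (NOT SPDK source) of the
     * CRC32C "data digest" check behind the Data digest error lines above.
     * The NVMe/TCP DDGST is CRC32C (Castagnoli polynomial, reflected form
     * 0x82F63B78) computed over the PDU DATA.
     * Build: cc -std=c99 -o crc32c_digest crc32c_digest.c && ./crc32c_digest
     */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t crc32c(const uint8_t *p, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;              /* initial value */
        while (len--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++)          /* bit-at-a-time for clarity */
                crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
        return crc ^ 0xFFFFFFFFu;                /* final XOR */
    }

    int main(void)
    {
        /* Known-answer test for CRC32C: "123456789" -> 0xE3069283 */
        printf("crc32c(\"123456789\") = 0x%08X\n",
               crc32c((const uint8_t *)"123456789", 9));

        /* Receiver-side check, in the spirit of tcp.c:data_crc32_calc_done():
         * payload stands in for one PDU's WRITE data; the stored DDGST is
         * deliberately corrupted here to force the mismatch seen in this log. */
        uint8_t payload[4096] = {0};
        uint32_t ddgst_in_pdu = crc32c(payload, sizeof(payload)) ^ 0x1;
        if (crc32c(payload, sizeof(payload)) != ddgst_in_pdu) {
            fprintf(stderr, "Data digest error (simulated)\n");
            return 1;
        }
        return 0;
    }

Because the digest only fails at the transport layer, the device never sees corrupt data: each WRITE is completed back to the host with a transient transport status rather than a media error, which is why the test expects this exact (00/22) completion for every injected corruption.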
00:29:21.269 [2024-10-01 16:54:12.746493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 [2024-10-01 16:54:12.746548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-10-01 16:54:12.746564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[~35 further occurrences of the same pattern elided, 16:54:12.749 through 16:54:12.857, roughly one every 3 ms, same tqpair/pdu, varying lba, sqhd cycling 0001/0021/0041/0061]
00:29:21.270 [2024-10-01 16:54:12.860425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 [2024-10-01 16:54:12.860477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:29:21.270 [2024-10-01 16:54:12.860492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.270 [2024-10-01 16:54:12.863493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.270 [2024-10-01 16:54:12.863548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.270 [2024-10-01 16:54:12.863563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.270 [2024-10-01 16:54:12.866619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.270 [2024-10-01 16:54:12.866690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.270 [2024-10-01 16:54:12.866705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.270 [2024-10-01 16:54:12.872914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.270 [2024-10-01 16:54:12.872981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.270 [2024-10-01 16:54:12.872997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.270 [2024-10-01 16:54:12.876258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.270 [2024-10-01 16:54:12.876322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.270 [2024-10-01 16:54:12.876337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.270 [2024-10-01 16:54:12.879452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.271 [2024-10-01 16:54:12.879502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.271 [2024-10-01 16:54:12.879517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.271 [2024-10-01 16:54:12.882531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.271 [2024-10-01 16:54:12.882583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.271 [2024-10-01 16:54:12.882598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.271 [2024-10-01 16:54:12.885567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.271 [2024-10-01 16:54:12.885654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.271 [2024-10-01 16:54:12.885672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.271 [2024-10-01 16:54:12.888897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.271 [2024-10-01 16:54:12.888947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.271 [2024-10-01 16:54:12.888962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.271 [2024-10-01 16:54:12.894825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.271 [2024-10-01 16:54:12.894899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.271 [2024-10-01 16:54:12.894914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.271 [2024-10-01 16:54:12.898055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.271 [2024-10-01 16:54:12.898103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.271 [2024-10-01 16:54:12.898119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.271 [2024-10-01 16:54:12.902011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.271 [2024-10-01 16:54:12.902075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.271 [2024-10-01 16:54:12.902091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.271 [2024-10-01 16:54:12.906159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.271 [2024-10-01 16:54:12.906206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.271 [2024-10-01 16:54:12.906222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.271 [2024-10-01 16:54:12.910059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.271 [2024-10-01 16:54:12.910105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.271 [2024-10-01 16:54:12.910121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.271 [2024-10-01 16:54:12.913943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.271 [2024-10-01 16:54:12.913998] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.271 [2024-10-01 16:54:12.914014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.271 [2024-10-01 16:54:12.918782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.271 [2024-10-01 16:54:12.919036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.271 [2024-10-01 16:54:12.919051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.271 [2024-10-01 16:54:12.926290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.271 [2024-10-01 16:54:12.926528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.271 [2024-10-01 16:54:12.926543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.271 [2024-10-01 16:54:12.932997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.271 [2024-10-01 16:54:12.933048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.271 [2024-10-01 16:54:12.933063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.271 [2024-10-01 16:54:12.936860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.271 [2024-10-01 16:54:12.936909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.271 [2024-10-01 16:54:12.936924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.271 [2024-10-01 16:54:12.940004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.271 [2024-10-01 16:54:12.940064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.271 [2024-10-01 16:54:12.940080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.271 [2024-10-01 16:54:12.943141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.271 [2024-10-01 16:54:12.943197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.271 [2024-10-01 16:54:12.943213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.271 [2024-10-01 16:54:12.946274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.271 [2024-10-01 16:54:12.946329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.271 [2024-10-01 16:54:12.946344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.271 [2024-10-01 16:54:12.949390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.271 [2024-10-01 16:54:12.949460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.271 [2024-10-01 16:54:12.949475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:12.952516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:12.952564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:12.952579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:12.955564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:12.955617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:12.955632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:12.958640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:12.958690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:12.958705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:12.961702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:12.961751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:12.961766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:12.964775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:12.964850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:12.964865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:12.967833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 
[2024-10-01 16:54:12.967883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:12.967897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:12.970960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:12.971020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:12.971035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:12.974014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:12.974068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:12.974084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:12.977045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:12.977098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:12.977113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:12.980086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:12.980143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:12.980158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:12.983155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:12.983203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:12.983221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:12.986204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:12.986256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:12.986271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:12.989314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with 
pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:12.989372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:12.989387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:12.992519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:12.992565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:12.992581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:12.997778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:12.997827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:12.997842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:13.002151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:13.002217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:13.002233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:13.006372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:13.006426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:13.006441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:13.010282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:13.010342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:13.010357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:13.015172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:13.015277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:13.015292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:13.020934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:13.021041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:13.021057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:13.024318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:13.024406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.533 [2024-10-01 16:54:13.024421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.533 [2024-10-01 16:54:13.027507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.533 [2024-10-01 16:54:13.027615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.027630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.030989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.031079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.031094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.034408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.034523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.034538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.037723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.037812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.037827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.040816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.040915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.040930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.043902] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.044008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.044023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.047128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.047224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.047241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.051705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.051779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.051794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.055367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.055456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.055471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.061404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.061462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.061478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.068269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.068480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.068496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.071504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.071624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.071640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:29:21.534 [2024-10-01 16:54:13.074637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.074757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.074773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.077812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.077916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.077931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.080957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.081084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.081100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.085104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.085255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.085271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.088817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.088932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.088947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.092280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.092401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.092417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.095919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.096034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.096050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.101507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.101731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.101747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.106506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.106697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.106712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.110829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.111105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.111121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.117036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.117264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.117280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.125841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.126115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.126131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.132744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.132798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.132816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.135837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.135886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.135902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.138948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.139019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.139035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.142225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.534 [2024-10-01 16:54:13.142392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.534 [2024-10-01 16:54:13.142408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.534 [2024-10-01 16:54:13.145838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.535 [2024-10-01 16:54:13.145912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.535 [2024-10-01 16:54:13.145927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.535 [2024-10-01 16:54:13.148966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.535 [2024-10-01 16:54:13.149106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.535 [2024-10-01 16:54:13.149122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.535 [2024-10-01 16:54:13.152616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.535 [2024-10-01 16:54:13.152700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.535 [2024-10-01 16:54:13.152715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.535 [2024-10-01 16:54:13.155978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.535 [2024-10-01 16:54:13.156122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.535 [2024-10-01 16:54:13.156137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.535 [2024-10-01 16:54:13.159263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.535 [2024-10-01 16:54:13.159310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.535 [2024-10-01 16:54:13.159329] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.535 [2024-10-01 16:54:13.162420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.535 [2024-10-01 16:54:13.162572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.535 [2024-10-01 16:54:13.162587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.535 [2024-10-01 16:54:13.167901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.535 [2024-10-01 16:54:13.168081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.535 [2024-10-01 16:54:13.168097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.535 [2024-10-01 16:54:13.172169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.535 [2024-10-01 16:54:13.172220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.535 [2024-10-01 16:54:13.172236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.535 [2024-10-01 16:54:13.175245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.535 [2024-10-01 16:54:13.175298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.535 [2024-10-01 16:54:13.175314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.535 [2024-10-01 16:54:13.178665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.535 [2024-10-01 16:54:13.179127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.535 [2024-10-01 16:54:13.179143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.535 [2024-10-01 16:54:13.182923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.535 [2024-10-01 16:54:13.182980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.535 [2024-10-01 16:54:13.182995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.535 [2024-10-01 16:54:13.186087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.535 [2024-10-01 16:54:13.186256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.535 [2024-10-01 16:54:13.186272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.535 [2024-10-01 16:54:13.192229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.535 [2024-10-01 16:54:13.192518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.535 [2024-10-01 16:54:13.192534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.535 [2024-10-01 16:54:13.199401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.535 [2024-10-01 16:54:13.199632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.535 [2024-10-01 16:54:13.199647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.535 [2024-10-01 16:54:13.206402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.535 [2024-10-01 16:54:13.206612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.535 [2024-10-01 16:54:13.206627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.796 [2024-10-01 16:54:13.214929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.796 [2024-10-01 16:54:13.215204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.796 [2024-10-01 16:54:13.215220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.796 [2024-10-01 16:54:13.221813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.796 [2024-10-01 16:54:13.221865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.796 [2024-10-01 16:54:13.221880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.796 [2024-10-01 16:54:13.229017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.796 [2024-10-01 16:54:13.229086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.796 [2024-10-01 16:54:13.229101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.796 [2024-10-01 16:54:13.232991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.796 [2024-10-01 16:54:13.233075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.796 [2024-10-01 
16:54:13.233090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.796 [2024-10-01 16:54:13.238810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.796 [2024-10-01 16:54:13.238908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.796 [2024-10-01 16:54:13.238924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.796 [2024-10-01 16:54:13.244583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.796 [2024-10-01 16:54:13.244791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.796 [2024-10-01 16:54:13.244806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.796 [2024-10-01 16:54:13.250494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.796 [2024-10-01 16:54:13.250584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.796 [2024-10-01 16:54:13.250599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.796 [2024-10-01 16:54:13.255927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.796 [2024-10-01 16:54:13.256006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.796 [2024-10-01 16:54:13.256022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.796 [2024-10-01 16:54:13.262098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.796 [2024-10-01 16:54:13.262352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.796 [2024-10-01 16:54:13.262368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.796 [2024-10-01 16:54:13.270065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.796 [2024-10-01 16:54:13.270332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.796 [2024-10-01 16:54:13.270356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.796 [2024-10-01 16:54:13.277462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.796 [2024-10-01 16:54:13.277716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:21.796 [2024-10-01 16:54:13.277731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.796 [2024-10-01 16:54:13.284725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.284913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.284928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.291698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.291921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.291937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.299228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.299325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.299340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.306894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.307130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.307145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.316343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.316574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.316593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.326400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.326614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.326629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.336660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.336915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.336937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.347148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.347384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.347399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.357035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.357292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.357307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.366962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.367232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.367247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.376928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.377162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.377178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.387647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.387889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.387904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.398233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.398502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.398519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.405558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.405613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.405629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.412191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.412401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.412417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.417889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.418008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.418023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.425649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.425905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.425921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.432154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.432233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.432248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.438376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.438618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.438633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.446291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.446507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.446522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.453056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.453269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.453285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.460777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.460920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.460935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.468274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.468341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.468356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.797 [2024-10-01 16:54:13.476085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:21.797 [2024-10-01 16:54:13.476152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.797 [2024-10-01 16:54:13.476167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.058 [2024-10-01 16:54:13.481640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.058 [2024-10-01 16:54:13.481719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.058 [2024-10-01 16:54:13.481735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.058 [2024-10-01 16:54:13.487110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.058 [2024-10-01 16:54:13.487165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.058 [2024-10-01 16:54:13.487180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.058 [2024-10-01 16:54:13.491557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.058 [2024-10-01 16:54:13.491771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.058 [2024-10-01 16:54:13.491786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.058 [2024-10-01 16:54:13.498719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.058 [2024-10-01 
16:54:13.499013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.058 [2024-10-01 16:54:13.499028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.058 [2024-10-01 16:54:13.503789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.058 [2024-10-01 16:54:13.503984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.058 [2024-10-01 16:54:13.504000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.058 [2024-10-01 16:54:13.511422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.058 [2024-10-01 16:54:13.511486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.058 [2024-10-01 16:54:13.511502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.058 [2024-10-01 16:54:13.518620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.058 [2024-10-01 16:54:13.518744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.058 [2024-10-01 16:54:13.518762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.058 [2024-10-01 16:54:13.525042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.058 [2024-10-01 16:54:13.525229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.058 [2024-10-01 16:54:13.525245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.058 [2024-10-01 16:54:13.534366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.058 [2024-10-01 16:54:13.534563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.058 [2024-10-01 16:54:13.534578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.058 [2024-10-01 16:54:13.544141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.058 [2024-10-01 16:54:13.544317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.058 [2024-10-01 16:54:13.544333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.058 [2024-10-01 16:54:13.553426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with 
pdu=0x2000198fef90 00:29:22.058 [2024-10-01 16:54:13.553687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.058 [2024-10-01 16:54:13.553703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.058 [2024-10-01 16:54:13.563015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.058 [2024-10-01 16:54:13.563244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.058 [2024-10-01 16:54:13.563259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.058 [2024-10-01 16:54:13.572480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.058 [2024-10-01 16:54:13.572738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.058 [2024-10-01 16:54:13.572754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.058 [2024-10-01 16:54:13.581186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.058 [2024-10-01 16:54:13.581342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.058 [2024-10-01 16:54:13.581357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.058 [2024-10-01 16:54:13.590374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.058 [2024-10-01 16:54:13.590605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.058 [2024-10-01 16:54:13.590621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.058 [2024-10-01 16:54:13.600095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.059 [2024-10-01 16:54:13.600328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.059 [2024-10-01 16:54:13.600343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.059 [2024-10-01 16:54:13.609788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.059 [2024-10-01 16:54:13.610037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.059 [2024-10-01 16:54:13.610053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.059 [2024-10-01 16:54:13.616499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.059 [2024-10-01 16:54:13.616572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.059 [2024-10-01 16:54:13.616588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.059 [2024-10-01 16:54:13.621965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.059 [2024-10-01 16:54:13.622154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.059 [2024-10-01 16:54:13.622170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.059 [2024-10-01 16:54:13.629931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.059 [2024-10-01 16:54:13.629994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.059 [2024-10-01 16:54:13.630010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.059 [2024-10-01 16:54:13.638963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.059 [2024-10-01 16:54:13.639079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.059 [2024-10-01 16:54:13.639094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.059 [2024-10-01 16:54:13.646669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.059 [2024-10-01 16:54:13.646833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.059 [2024-10-01 16:54:13.646848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.059 [2024-10-01 16:54:13.654752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.059 [2024-10-01 16:54:13.654963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.059 [2024-10-01 16:54:13.654982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.059 [2024-10-01 16:54:13.664313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90 00:29:22.059 [2024-10-01 16:54:13.664521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.059 [2024-10-01 16:54:13.664536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.059 [2024-10-01 16:54:13.674549] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90
00:29:22.059 [2024-10-01 16:54:13.674726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:22.059 [2024-10-01 16:54:13.674742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:22.059 [2024-10-01 16:54:13.685016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90
00:29:22.059 [2024-10-01 16:54:13.685270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:22.059 [2024-10-01 16:54:13.685285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:22.059 [2024-10-01 16:54:13.695042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90
00:29:22.059 [2024-10-01 16:54:13.695278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:22.059 [2024-10-01 16:54:13.695294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:22.059 [2024-10-01 16:54:13.705279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbf0950) with pdu=0x2000198fef90
00:29:22.059 [2024-10-01 16:54:13.705534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:22.059 [2024-10-01 16:54:13.705557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:22.059 5321.50 IOPS, 665.19 MiB/s
00:29:22.059 Latency(us)
00:29:22.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:22.059 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:22.059 nvme0n1 : 2.01 5314.81 664.35 0.00 0.00 3004.42 1354.83 12351.02
00:29:22.059 ===================================================================================================================
00:29:22.059 Total : 5314.81 664.35 0.00 0.00 3004.42 1354.83 12351.02
00:29:22.059 {
00:29:22.059 "results": [
00:29:22.059 {
00:29:22.059 "job": "nvme0n1",
00:29:22.059 "core_mask": "0x2",
00:29:22.059 "workload": "randwrite",
00:29:22.059 "status": "finished",
00:29:22.059 "queue_depth": 16,
00:29:22.059 "io_size": 131072,
00:29:22.059 "runtime": 2.006281,
00:29:22.059 "iops": 5314.808842829095,
00:29:22.059 "mibps": 664.3511053536369,
00:29:22.059 "io_failed": 0,
00:29:22.059 "io_timeout": 0,
00:29:22.059 "avg_latency_us": 3004.420240226809,
00:29:22.059 "min_latency_us": 1354.8307692307692,
00:29:22.059 "max_latency_us": 12351.015384615384
00:29:22.059 }
00:29:22.059 ],
00:29:22.059 "core_count": 1
00:29:22.059 }
00:29:22.320 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:22.320 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:22.320 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:22.320 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:22.320 | .driver_specific
00:29:22.320 | .nvme_error
00:29:22.320 | .status_code
00:29:22.320 | .command_transient_transport_error'
00:29:22.320 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 343 > 0 ))
00:29:22.320 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2853593
00:29:22.320 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2853593 ']'
00:29:22.320 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2853593
00:29:22.320 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:22.320 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:22.320 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2853593
00:29:22.320 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:22.320 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:22.320 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2853593'
00:29:22.320 killing process with pid 2853593
00:29:22.320 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2853593
00:29:22.320 Received shutdown signal, test time was about 2.000000 seconds
00:29:22.320
00:29:22.320 Latency(us)
00:29:22.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:22.320 ===================================================================================================================
00:29:22.320 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:22.320 16:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2853593
00:29:22.580 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2851294
00:29:22.580 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2851294 ']'
00:29:22.580 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2851294
00:29:22.580 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:22.580 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:22.580 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2851294
00:29:22.580 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:29:22.580 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:29:22.580 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2851294'
killing process with pid 2851294
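The trace just above is host/digest.sh verifying the digest-error run: it pulls the per-bdev NVMe error counters over the bperf RPC socket and asserts that at least one WRITE completed with COMMAND TRANSIENT TRANSPORT ERROR (343 were counted here). A minimal standalone sketch of that same check, assuming a bdevperf instance listening on /var/tmp/bperf.sock and the rpc.py path shown in the log:

```bash
#!/usr/bin/env bash
# Sketch of the transient-error assertion traced above. Socket path, bdev
# name, and jq filter are taken from the log; SPDK_DIR is an assumption.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# bdev_get_iostat reports driver_specific.nvme_error status-code counters
# for NVMe bdevs; extract the transient-transport-error count.
errs=$("$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')

# Each injected data-digest failure should surface as a
# COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, so require > 0.
(( errs > 0 )) || { echo "no transient transport errors seen" >&2; exit 1; }
```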
00:29:22.580 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2851294 00:29:22.580 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2851294 00:29:22.840 00:29:22.840 real 0m14.427s 00:29:22.840 user 0m28.590s 00:29:22.840 sys 0m3.409s 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:22.840 ************************************ 00:29:22.840 END TEST nvmf_digest_error 00:29:22.840 ************************************ 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:22.840 rmmod nvme_tcp 00:29:22.840 rmmod nvme_fabrics 00:29:22.840 rmmod nvme_keyring 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 2851294 ']' 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 2851294 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 2851294 ']' 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 2851294 00:29:22.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2851294) - No such process 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 2851294 is not found' 00:29:22.840 Process with pid 2851294 is not found 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.840 16:54:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:25.380 00:29:25.380 real 0m38.231s 00:29:25.380 user 0m58.958s 00:29:25.380 sys 0m12.326s 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:25.380 ************************************ 00:29:25.380 END TEST nvmf_digest 00:29:25.380 ************************************ 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.380 ************************************ 00:29:25.380 START TEST nvmf_bdevperf 00:29:25.380 ************************************ 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:25.380 * Looking for test storage... 
00:29:25.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:25.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.380 --rc genhtml_branch_coverage=1 00:29:25.380 --rc genhtml_function_coverage=1 00:29:25.380 --rc genhtml_legend=1 00:29:25.380 --rc geninfo_all_blocks=1 00:29:25.380 --rc geninfo_unexecuted_blocks=1 00:29:25.380 00:29:25.380 ' 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:25.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.380 --rc genhtml_branch_coverage=1 00:29:25.380 --rc genhtml_function_coverage=1 00:29:25.380 --rc genhtml_legend=1 00:29:25.380 --rc geninfo_all_blocks=1 00:29:25.380 --rc geninfo_unexecuted_blocks=1 00:29:25.380 00:29:25.380 ' 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:25.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.380 --rc genhtml_branch_coverage=1 00:29:25.380 --rc genhtml_function_coverage=1 00:29:25.380 --rc genhtml_legend=1 00:29:25.380 --rc geninfo_all_blocks=1 00:29:25.380 --rc geninfo_unexecuted_blocks=1 00:29:25.380 00:29:25.380 ' 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:25.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.380 --rc genhtml_branch_coverage=1 00:29:25.380 --rc genhtml_function_coverage=1 00:29:25.380 --rc genhtml_legend=1 00:29:25.380 --rc geninfo_all_blocks=1 00:29:25.380 --rc geninfo_unexecuted_blocks=1 00:29:25.380 00:29:25.380 ' 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:25.380 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:25.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:25.381 16:54:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:31.959 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:31.959 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
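The xtrace run here is nvmf/common.sh enumerating supported NICs: it matches the two Intel E810 functions (vendor 0x8086, device 0x159b) and, in the loop that continues below, resolves each matched PCI function to its kernel netdev through sysfs (the "Found net devices under 0000:4b:00.x: cvl_0_x" lines). A hedged sketch of the equivalent lookup, assuming the standard sysfs layout:

```bash
#!/usr/bin/env bash
# Standalone sketch of the E810 discovery traced in this section; the
# vendor/device IDs and the /sys/bus/pci/devices/<pci>/net/* expansion
# come straight from the log.
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    # A port bound to a kernel driver exposes its netdev name under <pci>/net/.
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done
```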
00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:31.959 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:31.959 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:31.959 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:31.960 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.220 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:32.220 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:32.220 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:32.220 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:32.220 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:32.220 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:32.220 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:32.220 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:32.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:29:32.220 00:29:32.220 --- 10.0.0.2 ping statistics --- 00:29:32.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.220 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:29:32.220 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:32.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:32.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms
00:29:32.220
00:29:32.220 --- 10.0.0.1 ping statistics ---
00:29:32.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:32.220 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms
00:29:32.220 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:32.220 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0
00:29:32.220 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:29:32.220 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:32.220 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:29:32.220 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:29:32.220 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:32.220 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:29:32.220 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:29:32.480 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:29:32.480 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:32.480 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:29:32.480 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:32.480 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:32.480 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:32.480 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=2858160
00:29:32.480 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 2858160
00:29:32.480 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2858160 ']'
00:29:32.480 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:32.480 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:32.480 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:32.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:32.480 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:32.480 16:54:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:32.480 [2024-10-01 16:54:23.971445] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
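nvmfappstart then reduces to launching nvmf_tgt inside that namespace and waiting for its UNIX-domain RPC socket. Roughly (the polling loop below is an assumption standing in for the harness's waitforlisten helper, which similarly retries up to 100 times):

    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # block until the target is up and /var/tmp/spdk.sock exists to accept RPCs
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done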
00:29:32.480 [2024-10-01 16:54:23.971494] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:32.480 [2024-10-01 16:54:24.024560] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:32.480 [2024-10-01 16:54:24.082384] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:32.480 [2024-10-01 16:54:24.082418] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:32.480 [2024-10-01 16:54:24.082426] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:32.480 [2024-10-01 16:54:24.082431] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:32.480 [2024-10-01 16:54:24.082435] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:32.480 [2024-10-01 16:54:24.082536] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:29:32.480 [2024-10-01 16:54:24.082677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:29:32.480 [2024-10-01 16:54:24.082678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:29:32.480 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:32.480 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:29:32.480 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:29:32.739 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:32.739 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:32.739 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:32.739 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:32.739 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:32.740 [2024-10-01 16:54:24.209297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:32.740 Malloc0
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:32.740 [2024-10-01 16:54:24.271582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=()
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:29:32.740 {
00:29:32.740 "params": {
00:29:32.740 "name": "Nvme$subsystem",
00:29:32.740 "trtype": "$TEST_TRANSPORT",
00:29:32.740 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:32.740 "adrfam": "ipv4",
00:29:32.740 "trsvcid": "$NVMF_PORT",
00:29:32.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:32.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:32.740 "hdgst": ${hdgst:-false},
00:29:32.740 "ddgst": ${ddgst:-false}
00:29:32.740 },
00:29:32.740 "method": "bdev_nvme_attach_controller"
00:29:32.740 }
00:29:32.740 EOF
00:29:32.740 )")
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq .
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=,
00:29:32.740 16:54:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:29:32.740 "params": {
00:29:32.740 "name": "Nvme1",
00:29:32.740 "trtype": "tcp",
00:29:32.740 "traddr": "10.0.0.2",
00:29:32.740 "adrfam": "ipv4",
00:29:32.740 "trsvcid": "4420",
00:29:32.740 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:32.740 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:32.740 "hdgst": false,
00:29:32.740 "ddgst": false
00:29:32.740 },
00:29:32.740 "method": "bdev_nvme_attach_controller"
00:29:32.740 }'
00:29:32.740 [2024-10-01 16:54:24.327319] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
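The rpc_cmd calls in this stretch are ordinary SPDK JSON-RPCs against /var/tmp/spdk.sock; issued by hand with scripts/rpc.py the provisioning sequence would look about like this (same arguments as logged: TCP transport with 8192-byte in-capsule data, a 64 MiB malloc bdev with 512-byte blocks, one subsystem, one namespace, one listener):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420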
00:29:32.740 [2024-10-01 16:54:24.327369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858293 ]
00:29:32.740 [2024-10-01 16:54:24.403940] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:32.999 [2024-10-01 16:54:24.466739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:29:33.259 Running I/O for 1 seconds...
00:29:34.198 11498.00 IOPS, 44.91 MiB/s
00:29:34.198
00:29:34.198 Latency(us)
00:29:34.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:34.198 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:34.198 Verification LBA range: start 0x0 length 0x4000
00:29:34.198 Nvme1n1 : 1.01 11552.94 45.13 0.00 0.00 11031.35 1436.75 11494.01
00:29:34.198 ===================================================================================================================
00:29:34.198 Total : 11552.94 45.13 0.00 0.00 11031.35 1436.75 11494.01
00:29:34.198 16:54:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2858517
00:29:34.198 16:54:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:29:34.198 16:54:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:29:34.198 16:54:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:29:34.198 16:54:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=()
00:29:34.198 16:54:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config
00:29:34.198 16:54:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:29:34.198 16:54:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:29:34.198 {
00:29:34.198 "params": {
00:29:34.198 "name": "Nvme$subsystem",
00:29:34.198 "trtype": "$TEST_TRANSPORT",
00:29:34.198 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:34.198 "adrfam": "ipv4",
00:29:34.198 "trsvcid": "$NVMF_PORT",
00:29:34.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:34.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:34.198 "hdgst": ${hdgst:-false},
00:29:34.198 "ddgst": ${ddgst:-false}
00:29:34.198 },
00:29:34.198 "method": "bdev_nvme_attach_controller"
00:29:34.198 }
00:29:34.198 EOF
00:29:34.198 )")
00:29:34.457 16:54:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat
00:29:34.457 16:54:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq .
00:29:34.457 16:54:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=,
00:29:34.457 16:54:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:29:34.457 "params": {
00:29:34.457 "name": "Nvme1",
00:29:34.457 "trtype": "tcp",
00:29:34.457 "traddr": "10.0.0.2",
00:29:34.457 "adrfam": "ipv4",
00:29:34.457 "trsvcid": "4420",
00:29:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:34.458 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:34.458 "hdgst": false,
00:29:34.458 "ddgst": false
00:29:34.458 },
00:29:34.458 "method": "bdev_nvme_attach_controller"
00:29:34.458 }'
00:29:34.458 [2024-10-01 16:54:25.936446] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
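gen_nvmf_target_json above is assembling the bdev_nvme_attach_controller config that bdevperf reads from the anonymous pipe behind --json /dev/fd/62 (and /dev/fd/63 for the second run). Reproduced standalone via process substitution; the per-controller entry is exactly what the trace prints, while the "subsystems"/"bdev" wrapper is an assumption about the full file the helper emits:

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    "$bdevperf" -q 128 -o 4096 -w verify -t 1 --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    )

The 1-second verify pass above (11498 IOPS against the malloc-backed namespace) is the sanity run; the 15-second run started next is the one the harness deliberately disturbs.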
00:29:34.458 [2024-10-01 16:54:25.936500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858517 ]
00:29:34.458 [2024-10-01 16:54:26.014232] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:34.717 [2024-10-01 16:54:26.074877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:29:34.717 Running I/O for 15 seconds...
00:29:37.342 11754.00 IOPS, 45.91 MiB/s
00:29:37.342 11727.50 IOPS, 45.81 MiB/s
00:29:37.342 16:54:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2858160
00:29:37.342 16:54:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:29:37.342 [2024-10-01 16:54:28.889676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:104648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:37.342 [2024-10-01 16:54:28.889716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_qpair.c print_command/print_completion pair repeats here for every other queued READ and WRITE command (lba 104176 through 105192, len:8 each), all completed as ABORTED - SQ DELETION (00/08) qid:1 ...]
00:29:37.346 [2024-10-01 16:54:28.891767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd97580 is same with the state(6) to be set
00:29:37.346 [2024-10-01 16:54:28.891775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:37.346 [2024-10-01 16:54:28.891781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:37.346 [2024-10-01 16:54:28.891788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104640 len:8 PRP1 0x0 PRP2 0x0
00:29:37.346 [2024-10-01 16:54:28.891794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
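The "(00/08)" in each aborted completion is the NVMe status pair SCT/SC: status code type 0x0 (generic command status) and status code 0x08, Command Aborted due to SQ Deletion - i.e. every I/O still queued when the target died was failed back to bdevperf rather than lost. A throwaway helper, purely hypothetical, to make the notation concrete:

    # decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion
    decode_status() {
        local sct=$((16#${1%/*})) sc=$((16#${1#*/}))
        printf 'sct=0x%x sc=0x%x\n' "$sct" "$sc"
    }
    decode_status 00/08    # -> sct=0x0 sc=0x8 (generic status / command aborted, SQ deletion)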
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.346 [2024-10-01 16:54:28.891831] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd97580 was disconnected and freed. reset controller. 00:29:37.346 [2024-10-01 16:54:28.895096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.346 [2024-10-01 16:54:28.895143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:37.346 [2024-10-01 16:54:28.895886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.346 [2024-10-01 16:54:28.895902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:37.346 [2024-10-01 16:54:28.895910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:37.346 [2024-10-01 16:54:28.896117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:37.346 [2024-10-01 16:54:28.896319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.346 [2024-10-01 16:54:28.896327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.346 [2024-10-01 16:54:28.896335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.346 [2024-10-01 16:54:28.899561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.346 [2024-10-01 16:54:28.908880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.346 [2024-10-01 16:54:28.909371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.346 [2024-10-01 16:54:28.909409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:37.346 [2024-10-01 16:54:28.909424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:37.346 [2024-10-01 16:54:28.909648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:37.346 [2024-10-01 16:54:28.909852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.346 [2024-10-01 16:54:28.909860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.346 [2024-10-01 16:54:28.909867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.346 [2024-10-01 16:54:28.913105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
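(Editor's aside: the completions above print their status as "(00/08)". That pair is status-code-type/status-code; SCT 0x0 is the NVMe generic command status set, and SC 0x08 in that set is "command aborted due to SQ deletion", which is why every queued READ/WRITE is completed as ABORTED - SQ DELETION once the qpair's submission queue is torn down for the reset. A minimal, illustrative decoder, not SPDK code, mapping only the values seen in this log:)

```c
/* Illustrative decoder for the "(SCT/SC)" pair printed by
 * spdk_nvme_print_completion above. Only the values that appear
 * in this log are mapped; anything else falls through to "other". */
#include <stdio.h>

static const char *nvme_generic_status(unsigned int sct, unsigned int sc)
{
    if (sct != 0x0)                       /* 0x0 = generic command status */
        return "other status code type";
    switch (sc) {
    case 0x00: return "SUCCESS";
    case 0x08: return "ABORTED - SQ DELETION";
    default:   return "other generic status";
    }
}

int main(void)
{
    /* (00/08) as seen in the completions above */
    printf("(00/08) -> %s\n", nvme_generic_status(0x0, 0x08));
    return 0;
}
```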
00:29:37.346 [2024-10-01 16:54:28.922416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.346 [2024-10-01 16:54:28.923074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.346 [2024-10-01 16:54:28.923111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.346 [2024-10-01 16:54:28.923123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.346 [2024-10-01 16:54:28.923344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.346 [2024-10-01 16:54:28.923547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.346 [2024-10-01 16:54:28.923555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.346 [2024-10-01 16:54:28.923562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.346 [2024-10-01 16:54:28.926795] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.346 [2024-10-01 16:54:28.935918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.346 [2024-10-01 16:54:28.936512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.346 [2024-10-01 16:54:28.936550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.346 [2024-10-01 16:54:28.936561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.346 [2024-10-01 16:54:28.936781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.346 [2024-10-01 16:54:28.936992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.346 [2024-10-01 16:54:28.937000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.346 [2024-10-01 16:54:28.937007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.346 [2024-10-01 16:54:28.940238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.346 [2024-10-01 16:54:28.949379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.346 [2024-10-01 16:54:28.949933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.346 [2024-10-01 16:54:28.949953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.346 [2024-10-01 16:54:28.949961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.346 [2024-10-01 16:54:28.950167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.346 [2024-10-01 16:54:28.950367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.346 [2024-10-01 16:54:28.950379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.346 [2024-10-01 16:54:28.950386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.346 [2024-10-01 16:54:28.953616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.346 [2024-10-01 16:54:28.962928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.346 [2024-10-01 16:54:28.963416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.346 [2024-10-01 16:54:28.963433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.346 [2024-10-01 16:54:28.963441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.346 [2024-10-01 16:54:28.963641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.346 [2024-10-01 16:54:28.963841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.346 [2024-10-01 16:54:28.963849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.346 [2024-10-01 16:54:28.963856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.346 [2024-10-01 16:54:28.967087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.346 [2024-10-01 16:54:28.976398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.346 [2024-10-01 16:54:28.976998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.346 [2024-10-01 16:54:28.977040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.346 [2024-10-01 16:54:28.977052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.346 [2024-10-01 16:54:28.977274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.346 [2024-10-01 16:54:28.977478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.346 [2024-10-01 16:54:28.977486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.346 [2024-10-01 16:54:28.977493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.346 [2024-10-01 16:54:28.980737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.346 [2024-10-01 16:54:28.989876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.346 [2024-10-01 16:54:28.990372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.346 [2024-10-01 16:54:28.990393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.346 [2024-10-01 16:54:28.990401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.346 [2024-10-01 16:54:28.990602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.346 [2024-10-01 16:54:28.990802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.346 [2024-10-01 16:54:28.990810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.346 [2024-10-01 16:54:28.990817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.346 [2024-10-01 16:54:28.994051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.643 [2024-10-01 16:54:29.003369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.643 [2024-10-01 16:54:29.003996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.643 [2024-10-01 16:54:29.004042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.643 [2024-10-01 16:54:29.004054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.643 [2024-10-01 16:54:29.004280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.643 [2024-10-01 16:54:29.004484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.643 [2024-10-01 16:54:29.004492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.643 [2024-10-01 16:54:29.004499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.643 [2024-10-01 16:54:29.007749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.643 [2024-10-01 16:54:29.016884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.643 [2024-10-01 16:54:29.017477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.643 [2024-10-01 16:54:29.017526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.643 [2024-10-01 16:54:29.017538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.643 [2024-10-01 16:54:29.017765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.643 [2024-10-01 16:54:29.017983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.643 [2024-10-01 16:54:29.017993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.643 [2024-10-01 16:54:29.018000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.643 [2024-10-01 16:54:29.021244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.643 [2024-10-01 16:54:29.030379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.643 [2024-10-01 16:54:29.030905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.643 [2024-10-01 16:54:29.030928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.643 [2024-10-01 16:54:29.030936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.643 [2024-10-01 16:54:29.031146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.643 [2024-10-01 16:54:29.031348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.644 [2024-10-01 16:54:29.031356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.644 [2024-10-01 16:54:29.031364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.644 [2024-10-01 16:54:29.034599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.644 [2024-10-01 16:54:29.043938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.644 [2024-10-01 16:54:29.044553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.644 [2024-10-01 16:54:29.044610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.644 [2024-10-01 16:54:29.044621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.644 [2024-10-01 16:54:29.044859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.644 [2024-10-01 16:54:29.045079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.644 [2024-10-01 16:54:29.045088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.644 [2024-10-01 16:54:29.045096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.644 [2024-10-01 16:54:29.048348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.644 [2024-10-01 16:54:29.057502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.644 [2024-10-01 16:54:29.058112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.644 [2024-10-01 16:54:29.058173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.644 [2024-10-01 16:54:29.058186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.644 [2024-10-01 16:54:29.058421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.644 [2024-10-01 16:54:29.058628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.644 [2024-10-01 16:54:29.058638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.644 [2024-10-01 16:54:29.058645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.644 [2024-10-01 16:54:29.061911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.644 [2024-10-01 16:54:29.071151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.644 [2024-10-01 16:54:29.071762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.644 [2024-10-01 16:54:29.071790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.644 [2024-10-01 16:54:29.071798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.644 [2024-10-01 16:54:29.072012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.644 [2024-10-01 16:54:29.072216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.644 [2024-10-01 16:54:29.072225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.644 [2024-10-01 16:54:29.072233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.644 [2024-10-01 16:54:29.075478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.644 [2024-10-01 16:54:29.084808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.644 [2024-10-01 16:54:29.085227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.644 [2024-10-01 16:54:29.085254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.644 [2024-10-01 16:54:29.085262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.644 [2024-10-01 16:54:29.085467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.644 [2024-10-01 16:54:29.085668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.644 [2024-10-01 16:54:29.085678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.644 [2024-10-01 16:54:29.085693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.644 [2024-10-01 16:54:29.088966] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.644 [2024-10-01 16:54:29.098307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.644 [2024-10-01 16:54:29.098920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.644 [2024-10-01 16:54:29.098990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.644 [2024-10-01 16:54:29.099003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.644 [2024-10-01 16:54:29.099238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.644 [2024-10-01 16:54:29.099445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.644 [2024-10-01 16:54:29.099453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.644 [2024-10-01 16:54:29.099461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.644 [2024-10-01 16:54:29.102962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.644 [2024-10-01 16:54:29.111944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.644 [2024-10-01 16:54:29.112541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.644 [2024-10-01 16:54:29.112570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.644 [2024-10-01 16:54:29.112579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.644 [2024-10-01 16:54:29.112783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.644 [2024-10-01 16:54:29.112996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.644 [2024-10-01 16:54:29.113006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.644 [2024-10-01 16:54:29.113014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.644 [2024-10-01 16:54:29.116260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.644 [2024-10-01 16:54:29.125586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.644 [2024-10-01 16:54:29.126119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.644 [2024-10-01 16:54:29.126144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.644 [2024-10-01 16:54:29.126152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.644 [2024-10-01 16:54:29.126355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.644 [2024-10-01 16:54:29.126556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.644 [2024-10-01 16:54:29.126564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.644 [2024-10-01 16:54:29.126572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.644 [2024-10-01 16:54:29.129831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.644 [2024-10-01 16:54:29.139161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.644 [2024-10-01 16:54:29.139690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.644 [2024-10-01 16:54:29.139725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.644 [2024-10-01 16:54:29.139733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.644 [2024-10-01 16:54:29.139936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.644 [2024-10-01 16:54:29.140150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.644 [2024-10-01 16:54:29.140160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.644 [2024-10-01 16:54:29.140167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.644 [2024-10-01 16:54:29.143428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.644 [2024-10-01 16:54:29.152760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.644 [2024-10-01 16:54:29.153391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.644 [2024-10-01 16:54:29.153451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.644 [2024-10-01 16:54:29.153463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.644 [2024-10-01 16:54:29.153699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.644 [2024-10-01 16:54:29.153906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.644 [2024-10-01 16:54:29.153915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.644 [2024-10-01 16:54:29.153924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.644 [2024-10-01 16:54:29.157193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.644 [2024-10-01 16:54:29.166344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.644 [2024-10-01 16:54:29.167007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.644 [2024-10-01 16:54:29.167069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.644 [2024-10-01 16:54:29.167083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.644 [2024-10-01 16:54:29.167319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.644 [2024-10-01 16:54:29.167526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.644 [2024-10-01 16:54:29.167536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.644 [2024-10-01 16:54:29.167544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.644 [2024-10-01 16:54:29.170809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.644 [2024-10-01 16:54:29.179962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.644 [2024-10-01 16:54:29.180522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.644 [2024-10-01 16:54:29.180550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.644 [2024-10-01 16:54:29.180558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.644 [2024-10-01 16:54:29.180762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.644 [2024-10-01 16:54:29.180983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.645 [2024-10-01 16:54:29.180994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.645 [2024-10-01 16:54:29.181002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.645 [2024-10-01 16:54:29.184246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.645 [2024-10-01 16:54:29.193595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.645 [2024-10-01 16:54:29.194272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.645 [2024-10-01 16:54:29.194333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.645 [2024-10-01 16:54:29.194345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.645 [2024-10-01 16:54:29.194580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.645 [2024-10-01 16:54:29.194787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.645 [2024-10-01 16:54:29.194795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.645 [2024-10-01 16:54:29.194803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.645 [2024-10-01 16:54:29.198076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.645 [2024-10-01 16:54:29.207229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.645 [2024-10-01 16:54:29.207909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.645 [2024-10-01 16:54:29.207987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.645 [2024-10-01 16:54:29.207999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.645 [2024-10-01 16:54:29.208234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.645 [2024-10-01 16:54:29.208440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.645 [2024-10-01 16:54:29.208449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.645 [2024-10-01 16:54:29.208456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.645 [2024-10-01 16:54:29.211707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.645 [2024-10-01 16:54:29.220845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.645 [2024-10-01 16:54:29.221518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.645 [2024-10-01 16:54:29.221578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.645 [2024-10-01 16:54:29.221591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.645 [2024-10-01 16:54:29.221825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.645 [2024-10-01 16:54:29.222047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.645 [2024-10-01 16:54:29.222056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.645 [2024-10-01 16:54:29.222064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.645 [2024-10-01 16:54:29.225321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.645 [2024-10-01 16:54:29.234473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.645 [2024-10-01 16:54:29.235082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.645 [2024-10-01 16:54:29.235144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.645 [2024-10-01 16:54:29.235158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.645 [2024-10-01 16:54:29.235396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.645 [2024-10-01 16:54:29.235603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.645 [2024-10-01 16:54:29.235613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.645 [2024-10-01 16:54:29.235620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.645 [2024-10-01 16:54:29.238889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.645 [2024-10-01 16:54:29.248054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.645 [2024-10-01 16:54:29.248740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.645 [2024-10-01 16:54:29.248800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.645 [2024-10-01 16:54:29.248812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.645 [2024-10-01 16:54:29.249062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.645 [2024-10-01 16:54:29.249270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.645 [2024-10-01 16:54:29.249279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.645 [2024-10-01 16:54:29.249287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.645 [2024-10-01 16:54:29.252538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.645 [2024-10-01 16:54:29.261690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.645 [2024-10-01 16:54:29.262383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.645 [2024-10-01 16:54:29.262443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.645 [2024-10-01 16:54:29.262456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.645 [2024-10-01 16:54:29.262691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.645 [2024-10-01 16:54:29.262898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.645 [2024-10-01 16:54:29.262907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.645 [2024-10-01 16:54:29.262914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.645 [2024-10-01 16:54:29.266182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.645 [2024-10-01 16:54:29.275323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.645 [2024-10-01 16:54:29.276036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.645 [2024-10-01 16:54:29.276097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.645 [2024-10-01 16:54:29.276116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.645 [2024-10-01 16:54:29.276351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.645 [2024-10-01 16:54:29.276557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.645 [2024-10-01 16:54:29.276566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.645 [2024-10-01 16:54:29.276574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.645 [2024-10-01 16:54:29.279835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.645 [2024-10-01 16:54:29.288991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.645 [2024-10-01 16:54:29.289558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.645 [2024-10-01 16:54:29.289619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.645 [2024-10-01 16:54:29.289631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.645 [2024-10-01 16:54:29.289866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.645 [2024-10-01 16:54:29.290087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.645 [2024-10-01 16:54:29.290097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.645 [2024-10-01 16:54:29.290106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.645 [2024-10-01 16:54:29.293357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.645 [2024-10-01 16:54:29.302503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.645 [2024-10-01 16:54:29.303098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.645 [2024-10-01 16:54:29.303159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.645 [2024-10-01 16:54:29.303172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.645 [2024-10-01 16:54:29.303406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.645 [2024-10-01 16:54:29.303613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.645 [2024-10-01 16:54:29.303623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.645 [2024-10-01 16:54:29.303631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.645 [2024-10-01 16:54:29.306897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.645 [2024-10-01 16:54:29.316054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.645 [2024-10-01 16:54:29.316714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.645 [2024-10-01 16:54:29.316774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.645 [2024-10-01 16:54:29.316786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.645 [2024-10-01 16:54:29.317031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.645 [2024-10-01 16:54:29.317239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.645 [2024-10-01 16:54:29.317255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.645 [2024-10-01 16:54:29.317262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.645 [2024-10-01 16:54:29.320512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.929 [2024-10-01 16:54:29.329664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.929 [2024-10-01 16:54:29.330290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.929 [2024-10-01 16:54:29.330351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.929 [2024-10-01 16:54:29.330363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.929 [2024-10-01 16:54:29.330598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.929 [2024-10-01 16:54:29.330804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.929 [2024-10-01 16:54:29.330813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.929 [2024-10-01 16:54:29.330820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.929 [2024-10-01 16:54:29.334089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.929 [2024-10-01 16:54:29.343254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.929 [2024-10-01 16:54:29.343931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.929 [2024-10-01 16:54:29.344001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.929 [2024-10-01 16:54:29.344013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.929 [2024-10-01 16:54:29.344248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.929 [2024-10-01 16:54:29.344455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.929 [2024-10-01 16:54:29.344464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.929 [2024-10-01 16:54:29.344472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.929 [2024-10-01 16:54:29.347732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.929 [2024-10-01 16:54:29.356877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.929 [2024-10-01 16:54:29.357557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.929 [2024-10-01 16:54:29.357617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.929 [2024-10-01 16:54:29.357629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.929 [2024-10-01 16:54:29.357865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.929 [2024-10-01 16:54:29.358086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.929 [2024-10-01 16:54:29.358095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.929 [2024-10-01 16:54:29.358103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.929 [2024-10-01 16:54:29.361355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.929 [2024-10-01 16:54:29.370499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.929 [2024-10-01 16:54:29.371217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.929 [2024-10-01 16:54:29.371278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.929 [2024-10-01 16:54:29.371290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.929 [2024-10-01 16:54:29.371525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.929 [2024-10-01 16:54:29.371731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.929 [2024-10-01 16:54:29.371739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.929 [2024-10-01 16:54:29.371747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.929 [2024-10-01 16:54:29.375015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.929 [2024-10-01 16:54:29.383991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.929 [2024-10-01 16:54:29.384672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.929 [2024-10-01 16:54:29.384733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.929 [2024-10-01 16:54:29.384745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.929 [2024-10-01 16:54:29.384995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.929 [2024-10-01 16:54:29.385202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.929 [2024-10-01 16:54:29.385212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.929 [2024-10-01 16:54:29.385220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.929 9802.00 IOPS, 38.29 MiB/s [2024-10-01 16:54:29.390270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.929 [2024-10-01 16:54:29.397528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.929 [2024-10-01 16:54:29.398003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.929 [2024-10-01 16:54:29.398033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.929 [2024-10-01 16:54:29.398042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.929 [2024-10-01 16:54:29.398247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.929 [2024-10-01 16:54:29.398450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.929 [2024-10-01 16:54:29.398459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.929 [2024-10-01 16:54:29.398467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.929 [2024-10-01 16:54:29.401713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.929 [2024-10-01 16:54:29.411079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.929 [2024-10-01 16:54:29.411668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.929 [2024-10-01 16:54:29.411690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.929 [2024-10-01 16:54:29.411698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.929 [2024-10-01 16:54:29.411909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.929 [2024-10-01 16:54:29.412119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.929 [2024-10-01 16:54:29.412129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.929 [2024-10-01 16:54:29.412137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.929 [2024-10-01 16:54:29.415394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.929 [2024-10-01 16:54:29.424546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.929 [2024-10-01 16:54:29.425090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.929 [2024-10-01 16:54:29.425152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.929 [2024-10-01 16:54:29.425166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.929 [2024-10-01 16:54:29.425401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.929 [2024-10-01 16:54:29.425609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.929 [2024-10-01 16:54:29.425618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.929 [2024-10-01 16:54:29.425627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.929 [2024-10-01 16:54:29.428895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.929 [2024-10-01 16:54:29.438051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.929 [2024-10-01 16:54:29.438636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.929 [2024-10-01 16:54:29.438665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.929 [2024-10-01 16:54:29.438673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.929 [2024-10-01 16:54:29.438877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.929 [2024-10-01 16:54:29.439087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.929 [2024-10-01 16:54:29.439098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.929 [2024-10-01 16:54:29.439106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.929 [2024-10-01 16:54:29.442352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.929 [2024-10-01 16:54:29.451518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.930 [2024-10-01 16:54:29.452083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.930 [2024-10-01 16:54:29.452107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.930 [2024-10-01 16:54:29.452115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.930 [2024-10-01 16:54:29.452319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.930 [2024-10-01 16:54:29.452520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.930 [2024-10-01 16:54:29.452529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.930 [2024-10-01 16:54:29.452545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.930 [2024-10-01 16:54:29.455791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.930 [2024-10-01 16:54:29.465134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.930 [2024-10-01 16:54:29.465681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.930 [2024-10-01 16:54:29.465703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.930 [2024-10-01 16:54:29.465711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.930 [2024-10-01 16:54:29.465913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.930 [2024-10-01 16:54:29.466126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.930 [2024-10-01 16:54:29.466135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.930 [2024-10-01 16:54:29.466142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.930 [2024-10-01 16:54:29.469394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.930 [2024-10-01 16:54:29.478731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.930 [2024-10-01 16:54:29.479309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.930 [2024-10-01 16:54:29.479331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420
00:29:37.930 [2024-10-01 16:54:29.479339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set
00:29:37.930 [2024-10-01 16:54:29.479542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor
00:29:37.930 [2024-10-01 16:54:29.479743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.930 [2024-10-01 16:54:29.479752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.930 [2024-10-01 16:54:29.479759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.930 [2024-10-01 16:54:29.483013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.930 [2024-10-01 16:54:29.492360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.930 [2024-10-01 16:54:29.493014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.930 [2024-10-01 16:54:29.493076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:37.930 [2024-10-01 16:54:29.493090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:37.930 [2024-10-01 16:54:29.493326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:37.930 [2024-10-01 16:54:29.493534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.930 [2024-10-01 16:54:29.493545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.930 [2024-10-01 16:54:29.493553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.930 [2024-10-01 16:54:29.496820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.930 [2024-10-01 16:54:29.505976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.930 [2024-10-01 16:54:29.506620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.930 [2024-10-01 16:54:29.506690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:37.930 [2024-10-01 16:54:29.506702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:37.930 [2024-10-01 16:54:29.506940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:37.930 [2024-10-01 16:54:29.507162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.930 [2024-10-01 16:54:29.507172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.930 [2024-10-01 16:54:29.507180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.930 [2024-10-01 16:54:29.510433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.930 [2024-10-01 16:54:29.519595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.930 [2024-10-01 16:54:29.520143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.930 [2024-10-01 16:54:29.520205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:37.930 [2024-10-01 16:54:29.520217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:37.930 [2024-10-01 16:54:29.520453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:37.930 [2024-10-01 16:54:29.520660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.930 [2024-10-01 16:54:29.520669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.930 [2024-10-01 16:54:29.520677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.930 [2024-10-01 16:54:29.523939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.930 [2024-10-01 16:54:29.533109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.930 [2024-10-01 16:54:29.533784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.930 [2024-10-01 16:54:29.533842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:37.930 [2024-10-01 16:54:29.533853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:37.930 [2024-10-01 16:54:29.534097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:37.930 [2024-10-01 16:54:29.534303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.930 [2024-10-01 16:54:29.534313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.930 [2024-10-01 16:54:29.534321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.930 [2024-10-01 16:54:29.537570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.930 [2024-10-01 16:54:29.546733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.930 [2024-10-01 16:54:29.547420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.930 [2024-10-01 16:54:29.547473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:37.930 [2024-10-01 16:54:29.547484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:37.930 [2024-10-01 16:54:29.547714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:37.930 [2024-10-01 16:54:29.547926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.930 [2024-10-01 16:54:29.547935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.930 [2024-10-01 16:54:29.547943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.930 [2024-10-01 16:54:29.551204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.930 [2024-10-01 16:54:29.560360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.930 [2024-10-01 16:54:29.561013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.930 [2024-10-01 16:54:29.561064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:37.930 [2024-10-01 16:54:29.561076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:37.930 [2024-10-01 16:54:29.561307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:37.930 [2024-10-01 16:54:29.561511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.930 [2024-10-01 16:54:29.561519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.930 [2024-10-01 16:54:29.561527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.930 [2024-10-01 16:54:29.564780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.930 [2024-10-01 16:54:29.573918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.930 [2024-10-01 16:54:29.574497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.930 [2024-10-01 16:54:29.574519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:37.930 [2024-10-01 16:54:29.574527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:37.930 [2024-10-01 16:54:29.574729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:37.930 [2024-10-01 16:54:29.574929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.930 [2024-10-01 16:54:29.574938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.930 [2024-10-01 16:54:29.574945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.930 [2024-10-01 16:54:29.578189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:37.930 [2024-10-01 16:54:29.587515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.930 [2024-10-01 16:54:29.588054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.930 [2024-10-01 16:54:29.588074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:37.930 [2024-10-01 16:54:29.588082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:37.930 [2024-10-01 16:54:29.588283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:37.931 [2024-10-01 16:54:29.588484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.931 [2024-10-01 16:54:29.588492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.931 [2024-10-01 16:54:29.588499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.192 [2024-10-01 16:54:29.591753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.192 [2024-10-01 16:54:29.601093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.192 [2024-10-01 16:54:29.601686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.192 [2024-10-01 16:54:29.601736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.192 [2024-10-01 16:54:29.601746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.192 [2024-10-01 16:54:29.601985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.192 [2024-10-01 16:54:29.602191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.192 [2024-10-01 16:54:29.602199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.192 [2024-10-01 16:54:29.602207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.192 [2024-10-01 16:54:29.605456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.192 [2024-10-01 16:54:29.614596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.192 [2024-10-01 16:54:29.615257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.192 [2024-10-01 16:54:29.615306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.192 [2024-10-01 16:54:29.615318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.192 [2024-10-01 16:54:29.615545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.192 [2024-10-01 16:54:29.615749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.192 [2024-10-01 16:54:29.615757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.192 [2024-10-01 16:54:29.615764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.192 [2024-10-01 16:54:29.619021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.192 [2024-10-01 16:54:29.628162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.192 [2024-10-01 16:54:29.628727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.192 [2024-10-01 16:54:29.628752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.192 [2024-10-01 16:54:29.628761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.192 [2024-10-01 16:54:29.628964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.192 [2024-10-01 16:54:29.629178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.192 [2024-10-01 16:54:29.629186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.192 [2024-10-01 16:54:29.629193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.192 [2024-10-01 16:54:29.632428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.192 [2024-10-01 16:54:29.641748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.192 [2024-10-01 16:54:29.642221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.192 [2024-10-01 16:54:29.642243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.192 [2024-10-01 16:54:29.642258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.192 [2024-10-01 16:54:29.642460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.193 [2024-10-01 16:54:29.642661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.193 [2024-10-01 16:54:29.642669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.193 [2024-10-01 16:54:29.642676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.193 [2024-10-01 16:54:29.646023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.193 [2024-10-01 16:54:29.655359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.193 [2024-10-01 16:54:29.656031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.193 [2024-10-01 16:54:29.656085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.193 [2024-10-01 16:54:29.656098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.193 [2024-10-01 16:54:29.656328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.193 [2024-10-01 16:54:29.656533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.193 [2024-10-01 16:54:29.656543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.193 [2024-10-01 16:54:29.656551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.193 [2024-10-01 16:54:29.659810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.193 [2024-10-01 16:54:29.668962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.193 [2024-10-01 16:54:29.669515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.193 [2024-10-01 16:54:29.669541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.193 [2024-10-01 16:54:29.669549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.193 [2024-10-01 16:54:29.669753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.193 [2024-10-01 16:54:29.669954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.193 [2024-10-01 16:54:29.669964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.193 [2024-10-01 16:54:29.669981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.193 [2024-10-01 16:54:29.673226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.193 [2024-10-01 16:54:29.682573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.193 [2024-10-01 16:54:29.683146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.193 [2024-10-01 16:54:29.683171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.193 [2024-10-01 16:54:29.683179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.193 [2024-10-01 16:54:29.683381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.193 [2024-10-01 16:54:29.683582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.193 [2024-10-01 16:54:29.683599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.193 [2024-10-01 16:54:29.683608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.193 [2024-10-01 16:54:29.686851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.193 [2024-10-01 16:54:29.696220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.193 [2024-10-01 16:54:29.696782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.193 [2024-10-01 16:54:29.696806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.193 [2024-10-01 16:54:29.696814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.193 [2024-10-01 16:54:29.697025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.193 [2024-10-01 16:54:29.697228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.193 [2024-10-01 16:54:29.697237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.193 [2024-10-01 16:54:29.697245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.193 [2024-10-01 16:54:29.700482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.193 [2024-10-01 16:54:29.709923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.193 [2024-10-01 16:54:29.710353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.193 [2024-10-01 16:54:29.710377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.193 [2024-10-01 16:54:29.710385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.193 [2024-10-01 16:54:29.710588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.193 [2024-10-01 16:54:29.710790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.193 [2024-10-01 16:54:29.710799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.193 [2024-10-01 16:54:29.710806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.193 [2024-10-01 16:54:29.714049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.193 [2024-10-01 16:54:29.723556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.193 [2024-10-01 16:54:29.724137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.193 [2024-10-01 16:54:29.724158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.193 [2024-10-01 16:54:29.724166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.193 [2024-10-01 16:54:29.724367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.193 [2024-10-01 16:54:29.724567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.193 [2024-10-01 16:54:29.724576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.193 [2024-10-01 16:54:29.724583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.193 [2024-10-01 16:54:29.727820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.193 [2024-10-01 16:54:29.737153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.193 [2024-10-01 16:54:29.737688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.193 [2024-10-01 16:54:29.737707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.193 [2024-10-01 16:54:29.737715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.193 [2024-10-01 16:54:29.737916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.193 [2024-10-01 16:54:29.738127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.193 [2024-10-01 16:54:29.738136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.193 [2024-10-01 16:54:29.738144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.193 [2024-10-01 16:54:29.741380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.193 [2024-10-01 16:54:29.750714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.193 [2024-10-01 16:54:29.751380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.193 [2024-10-01 16:54:29.751432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.193 [2024-10-01 16:54:29.751443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.193 [2024-10-01 16:54:29.751672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.193 [2024-10-01 16:54:29.751877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.193 [2024-10-01 16:54:29.751886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.193 [2024-10-01 16:54:29.751893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.193 [2024-10-01 16:54:29.755148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.193 [2024-10-01 16:54:29.764286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.193 [2024-10-01 16:54:29.764957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.193 [2024-10-01 16:54:29.765020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.193 [2024-10-01 16:54:29.765032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.193 [2024-10-01 16:54:29.765264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.193 [2024-10-01 16:54:29.765470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.193 [2024-10-01 16:54:29.765478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.193 [2024-10-01 16:54:29.765486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.193 [2024-10-01 16:54:29.768733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.193 [2024-10-01 16:54:29.777867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.193 [2024-10-01 16:54:29.778411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.193 [2024-10-01 16:54:29.778434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.193 [2024-10-01 16:54:29.778442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.193 [2024-10-01 16:54:29.778651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.193 [2024-10-01 16:54:29.778851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.193 [2024-10-01 16:54:29.778869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.194 [2024-10-01 16:54:29.778876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.194 [2024-10-01 16:54:29.782122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.194 [2024-10-01 16:54:29.791461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.194 [2024-10-01 16:54:29.792084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.194 [2024-10-01 16:54:29.792134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.194 [2024-10-01 16:54:29.792146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.194 [2024-10-01 16:54:29.792375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.194 [2024-10-01 16:54:29.792579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.194 [2024-10-01 16:54:29.792589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.194 [2024-10-01 16:54:29.792597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.194 [2024-10-01 16:54:29.795849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.194 [2024-10-01 16:54:29.804992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.194 [2024-10-01 16:54:29.805646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.194 [2024-10-01 16:54:29.805695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.194 [2024-10-01 16:54:29.805707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.194 [2024-10-01 16:54:29.805935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.194 [2024-10-01 16:54:29.806150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.194 [2024-10-01 16:54:29.806160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.194 [2024-10-01 16:54:29.806168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.194 [2024-10-01 16:54:29.809409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.194 [2024-10-01 16:54:29.818540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.194 [2024-10-01 16:54:29.819060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.194 [2024-10-01 16:54:29.819086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.194 [2024-10-01 16:54:29.819094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.194 [2024-10-01 16:54:29.819297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.194 [2024-10-01 16:54:29.819497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.194 [2024-10-01 16:54:29.819506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.194 [2024-10-01 16:54:29.819523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.194 [2024-10-01 16:54:29.822762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.194 [2024-10-01 16:54:29.832082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.194 [2024-10-01 16:54:29.832624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.194 [2024-10-01 16:54:29.832673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.194 [2024-10-01 16:54:29.832684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.194 [2024-10-01 16:54:29.832912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.194 [2024-10-01 16:54:29.833127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.194 [2024-10-01 16:54:29.833136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.194 [2024-10-01 16:54:29.833144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.194 [2024-10-01 16:54:29.836387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.194 [2024-10-01 16:54:29.845725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.194 [2024-10-01 16:54:29.846282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.194 [2024-10-01 16:54:29.846305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.194 [2024-10-01 16:54:29.846313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.194 [2024-10-01 16:54:29.846515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.194 [2024-10-01 16:54:29.846715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.194 [2024-10-01 16:54:29.846724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.194 [2024-10-01 16:54:29.846731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.194 [2024-10-01 16:54:29.849968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.194 [2024-10-01 16:54:29.859297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.194 [2024-10-01 16:54:29.859714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.194 [2024-10-01 16:54:29.859733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.194 [2024-10-01 16:54:29.859741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.194 [2024-10-01 16:54:29.859941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.194 [2024-10-01 16:54:29.860147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.194 [2024-10-01 16:54:29.860157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.194 [2024-10-01 16:54:29.860165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.194 [2024-10-01 16:54:29.863399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.194 [2024-10-01 16:54:29.872918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.194 [2024-10-01 16:54:29.873456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.194 [2024-10-01 16:54:29.873483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.194 [2024-10-01 16:54:29.873491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.194 [2024-10-01 16:54:29.873692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.194 [2024-10-01 16:54:29.873893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.194 [2024-10-01 16:54:29.873902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.194 [2024-10-01 16:54:29.873909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.455 [2024-10-01 16:54:29.877159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.455 [2024-10-01 16:54:29.886502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.455 [2024-10-01 16:54:29.887065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.455 [2024-10-01 16:54:29.887106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.455 [2024-10-01 16:54:29.887116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.455 [2024-10-01 16:54:29.887336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.455 [2024-10-01 16:54:29.887540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.455 [2024-10-01 16:54:29.887550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.455 [2024-10-01 16:54:29.887558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.455 [2024-10-01 16:54:29.890827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.455 [2024-10-01 16:54:29.899992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.455 [2024-10-01 16:54:29.900628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.455 [2024-10-01 16:54:29.900690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.455 [2024-10-01 16:54:29.900702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.455 [2024-10-01 16:54:29.900938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.455 [2024-10-01 16:54:29.901157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.455 [2024-10-01 16:54:29.901168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.455 [2024-10-01 16:54:29.901176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.455 [2024-10-01 16:54:29.904439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.455 [2024-10-01 16:54:29.913594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.455 [2024-10-01 16:54:29.914068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.455 [2024-10-01 16:54:29.914098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.455 [2024-10-01 16:54:29.914106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.455 [2024-10-01 16:54:29.914310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.455 [2024-10-01 16:54:29.914521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.455 [2024-10-01 16:54:29.914532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.455 [2024-10-01 16:54:29.914539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.455 [2024-10-01 16:54:29.917788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.455 [2024-10-01 16:54:29.927125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.455 [2024-10-01 16:54:29.927678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.455 [2024-10-01 16:54:29.927706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.455 [2024-10-01 16:54:29.927715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.455 [2024-10-01 16:54:29.927917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.456 [2024-10-01 16:54:29.928128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.456 [2024-10-01 16:54:29.928140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.456 [2024-10-01 16:54:29.928147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.456 [2024-10-01 16:54:29.931392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.456 [2024-10-01 16:54:29.940725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.456 [2024-10-01 16:54:29.941259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.456 [2024-10-01 16:54:29.941282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.456 [2024-10-01 16:54:29.941290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.456 [2024-10-01 16:54:29.941492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.456 [2024-10-01 16:54:29.941693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.456 [2024-10-01 16:54:29.941702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.456 [2024-10-01 16:54:29.941709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.456 [2024-10-01 16:54:29.944949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.456 [2024-10-01 16:54:29.954296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.456 [2024-10-01 16:54:29.954828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.456 [2024-10-01 16:54:29.954850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.456 [2024-10-01 16:54:29.954857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.456 [2024-10-01 16:54:29.955068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.456 [2024-10-01 16:54:29.955271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.456 [2024-10-01 16:54:29.955279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.456 [2024-10-01 16:54:29.955287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.456 [2024-10-01 16:54:29.958535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.456 [2024-10-01 16:54:29.967773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.456 [2024-10-01 16:54:29.968405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.456 [2024-10-01 16:54:29.968466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.456 [2024-10-01 16:54:29.968478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.456 [2024-10-01 16:54:29.968715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.456 [2024-10-01 16:54:29.968921] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.456 [2024-10-01 16:54:29.968931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.456 [2024-10-01 16:54:29.968938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.456 [2024-10-01 16:54:29.972200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.456 [2024-10-01 16:54:29.981346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.456 [2024-10-01 16:54:29.981889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.456 [2024-10-01 16:54:29.981917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.456 [2024-10-01 16:54:29.981926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.456 [2024-10-01 16:54:29.982137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.456 [2024-10-01 16:54:29.982339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.456 [2024-10-01 16:54:29.982349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.456 [2024-10-01 16:54:29.982357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.456 [2024-10-01 16:54:29.985605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.456 [2024-10-01 16:54:29.994944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.456 [2024-10-01 16:54:29.995606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.456 [2024-10-01 16:54:29.995667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.456 [2024-10-01 16:54:29.995678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.456 [2024-10-01 16:54:29.995914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.456 [2024-10-01 16:54:29.996134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.456 [2024-10-01 16:54:29.996143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.456 [2024-10-01 16:54:29.996152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.456 [2024-10-01 16:54:29.999404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.456 [2024-10-01 16:54:30.008443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.456 [2024-10-01 16:54:30.009091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.456 [2024-10-01 16:54:30.009154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.456 [2024-10-01 16:54:30.009175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.456 [2024-10-01 16:54:30.009418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.456 [2024-10-01 16:54:30.009631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.456 [2024-10-01 16:54:30.009641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.456 [2024-10-01 16:54:30.009649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.456 [2024-10-01 16:54:30.012925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.456 [2024-10-01 16:54:30.022070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.456 [2024-10-01 16:54:30.022747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.456 [2024-10-01 16:54:30.022807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.456 [2024-10-01 16:54:30.022820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.456 [2024-10-01 16:54:30.023067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.456 [2024-10-01 16:54:30.023275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.456 [2024-10-01 16:54:30.023285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.456 [2024-10-01 16:54:30.023292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.456 [2024-10-01 16:54:30.026545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.456 [2024-10-01 16:54:30.035687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.456 [2024-10-01 16:54:30.036307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.456 [2024-10-01 16:54:30.036367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.456 [2024-10-01 16:54:30.036380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.456 [2024-10-01 16:54:30.036616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.456 [2024-10-01 16:54:30.036823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.456 [2024-10-01 16:54:30.036832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.456 [2024-10-01 16:54:30.036840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.456 [2024-10-01 16:54:30.040108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.456 [2024-10-01 16:54:30.049269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.456 [2024-10-01 16:54:30.049919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.456 [2024-10-01 16:54:30.049991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.456 [2024-10-01 16:54:30.050004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.456 [2024-10-01 16:54:30.050240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.456 [2024-10-01 16:54:30.050447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.456 [2024-10-01 16:54:30.050463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.456 [2024-10-01 16:54:30.050471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.456 [2024-10-01 16:54:30.053732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.456 [2024-10-01 16:54:30.062883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.456 [2024-10-01 16:54:30.063527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.456 [2024-10-01 16:54:30.063583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.456 [2024-10-01 16:54:30.063595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.456 [2024-10-01 16:54:30.063827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.456 [2024-10-01 16:54:30.064044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.456 [2024-10-01 16:54:30.064055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.456 [2024-10-01 16:54:30.064063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.456 [2024-10-01 16:54:30.067314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.456 [2024-10-01 16:54:30.076463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.456 [2024-10-01 16:54:30.077004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.456 [2024-10-01 16:54:30.077030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.456 [2024-10-01 16:54:30.077039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.456 [2024-10-01 16:54:30.077241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.456 [2024-10-01 16:54:30.077442] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.456 [2024-10-01 16:54:30.077451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.456 [2024-10-01 16:54:30.077458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.456 [2024-10-01 16:54:30.080693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.456 [2024-10-01 16:54:30.090018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.457 [2024-10-01 16:54:30.090621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.457 [2024-10-01 16:54:30.090670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.457 [2024-10-01 16:54:30.090682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.457 [2024-10-01 16:54:30.090910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.457 [2024-10-01 16:54:30.091130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.457 [2024-10-01 16:54:30.091140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.457 [2024-10-01 16:54:30.091148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.457 [2024-10-01 16:54:30.094628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.457 [2024-10-01 16:54:30.103599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.457 [2024-10-01 16:54:30.104278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.457 [2024-10-01 16:54:30.104324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.457 [2024-10-01 16:54:30.104335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.457 [2024-10-01 16:54:30.104559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.457 [2024-10-01 16:54:30.104763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.457 [2024-10-01 16:54:30.104772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.457 [2024-10-01 16:54:30.104779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.457 [2024-10-01 16:54:30.108030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.457 [2024-10-01 16:54:30.117167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.457 [2024-10-01 16:54:30.117779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.457 [2024-10-01 16:54:30.117823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.457 [2024-10-01 16:54:30.117834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.457 [2024-10-01 16:54:30.118066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.457 [2024-10-01 16:54:30.118271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.457 [2024-10-01 16:54:30.118280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.457 [2024-10-01 16:54:30.118287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.457 [2024-10-01 16:54:30.121523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.457 [2024-10-01 16:54:30.130656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.457 [2024-10-01 16:54:30.131325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.457 [2024-10-01 16:54:30.131367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.457 [2024-10-01 16:54:30.131378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.457 [2024-10-01 16:54:30.131599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.457 [2024-10-01 16:54:30.131803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.457 [2024-10-01 16:54:30.131812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.457 [2024-10-01 16:54:30.131819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.457 [2024-10-01 16:54:30.135069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.718 [2024-10-01 16:54:30.144202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.718 [2024-10-01 16:54:30.144836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.718 [2024-10-01 16:54:30.144876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.718 [2024-10-01 16:54:30.144886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.718 [2024-10-01 16:54:30.145121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.718 [2024-10-01 16:54:30.145326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.718 [2024-10-01 16:54:30.145334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.718 [2024-10-01 16:54:30.145341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.718 [2024-10-01 16:54:30.148584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.718 [2024-10-01 16:54:30.157712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.718 [2024-10-01 16:54:30.158369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.718 [2024-10-01 16:54:30.158408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.718 [2024-10-01 16:54:30.158418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.718 [2024-10-01 16:54:30.158639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.718 [2024-10-01 16:54:30.158842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.718 [2024-10-01 16:54:30.158851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.718 [2024-10-01 16:54:30.158859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.718 [2024-10-01 16:54:30.162097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.718 [2024-10-01 16:54:30.171223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.718 [2024-10-01 16:54:30.171734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.718 [2024-10-01 16:54:30.171753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.718 [2024-10-01 16:54:30.171760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.718 [2024-10-01 16:54:30.171960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.718 [2024-10-01 16:54:30.172166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.718 [2024-10-01 16:54:30.172175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.718 [2024-10-01 16:54:30.172182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.718 [2024-10-01 16:54:30.175407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.718 [2024-10-01 16:54:30.184708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.718 [2024-10-01 16:54:30.185311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.718 [2024-10-01 16:54:30.185348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.718 [2024-10-01 16:54:30.185358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.718 [2024-10-01 16:54:30.185576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.718 [2024-10-01 16:54:30.185779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.718 [2024-10-01 16:54:30.185788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.718 [2024-10-01 16:54:30.185799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.718 [2024-10-01 16:54:30.189038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.718 [2024-10-01 16:54:30.198170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.718 [2024-10-01 16:54:30.198754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.718 [2024-10-01 16:54:30.198790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.718 [2024-10-01 16:54:30.198800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.718 [2024-10-01 16:54:30.199026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.718 [2024-10-01 16:54:30.199230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.718 [2024-10-01 16:54:30.199237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.718 [2024-10-01 16:54:30.199245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.718 [2024-10-01 16:54:30.202473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.718 [2024-10-01 16:54:30.211782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.718 [2024-10-01 16:54:30.212420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.718 [2024-10-01 16:54:30.212456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.718 [2024-10-01 16:54:30.212466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.719 [2024-10-01 16:54:30.212684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.719 [2024-10-01 16:54:30.212886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.719 [2024-10-01 16:54:30.212895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.719 [2024-10-01 16:54:30.212902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.719 [2024-10-01 16:54:30.216137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.719 [2024-10-01 16:54:30.225262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.719 [2024-10-01 16:54:30.225701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.719 [2024-10-01 16:54:30.225719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.719 [2024-10-01 16:54:30.225727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.719 [2024-10-01 16:54:30.225927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.719 [2024-10-01 16:54:30.226134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.719 [2024-10-01 16:54:30.226143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.719 [2024-10-01 16:54:30.226150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.719 [2024-10-01 16:54:30.229377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.719 [2024-10-01 16:54:30.238879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.719 [2024-10-01 16:54:30.239514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.719 [2024-10-01 16:54:30.239554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.719 [2024-10-01 16:54:30.239565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.719 [2024-10-01 16:54:30.239783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.719 [2024-10-01 16:54:30.239994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.719 [2024-10-01 16:54:30.240003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.719 [2024-10-01 16:54:30.240010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.719 [2024-10-01 16:54:30.243239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.719 [2024-10-01 16:54:30.252374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.719 [2024-10-01 16:54:30.252916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.719 [2024-10-01 16:54:30.252934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.719 [2024-10-01 16:54:30.252941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.719 [2024-10-01 16:54:30.253149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.719 [2024-10-01 16:54:30.253349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.719 [2024-10-01 16:54:30.253356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.719 [2024-10-01 16:54:30.253363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.719 [2024-10-01 16:54:30.256585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.719 [2024-10-01 16:54:30.265891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.719 [2024-10-01 16:54:30.266383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.719 [2024-10-01 16:54:30.266399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.719 [2024-10-01 16:54:30.266406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.719 [2024-10-01 16:54:30.266605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.719 [2024-10-01 16:54:30.266805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.719 [2024-10-01 16:54:30.266812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.719 [2024-10-01 16:54:30.266819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.719 [2024-10-01 16:54:30.270049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.719 [2024-10-01 16:54:30.279354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.719 [2024-10-01 16:54:30.279843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.719 [2024-10-01 16:54:30.279858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.719 [2024-10-01 16:54:30.279865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.719 [2024-10-01 16:54:30.280071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.719 [2024-10-01 16:54:30.280274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.719 [2024-10-01 16:54:30.280282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.719 [2024-10-01 16:54:30.280289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.719 [2024-10-01 16:54:30.283511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.719 [2024-10-01 16:54:30.292826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.719 [2024-10-01 16:54:30.293366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.719 [2024-10-01 16:54:30.293382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.719 [2024-10-01 16:54:30.293389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.719 [2024-10-01 16:54:30.293588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.719 [2024-10-01 16:54:30.293787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.719 [2024-10-01 16:54:30.293796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.719 [2024-10-01 16:54:30.293802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.719 [2024-10-01 16:54:30.297031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.719 [2024-10-01 16:54:30.306339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.719 [2024-10-01 16:54:30.306837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.719 [2024-10-01 16:54:30.306852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.719 [2024-10-01 16:54:30.306859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.719 [2024-10-01 16:54:30.307064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.719 [2024-10-01 16:54:30.307263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.719 [2024-10-01 16:54:30.307272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.719 [2024-10-01 16:54:30.307278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.719 [2024-10-01 16:54:30.310499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.719 [2024-10-01 16:54:30.319804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.719 [2024-10-01 16:54:30.320352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.719 [2024-10-01 16:54:30.320367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.719 [2024-10-01 16:54:30.320374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.719 [2024-10-01 16:54:30.320573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.719 [2024-10-01 16:54:30.320772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.719 [2024-10-01 16:54:30.320779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.719 [2024-10-01 16:54:30.320786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.719 [2024-10-01 16:54:30.324017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.719 [2024-10-01 16:54:30.333322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.719 [2024-10-01 16:54:30.333695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.719 [2024-10-01 16:54:30.333712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.719 [2024-10-01 16:54:30.333719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.719 [2024-10-01 16:54:30.333919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.719 [2024-10-01 16:54:30.334126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.719 [2024-10-01 16:54:30.334135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.719 [2024-10-01 16:54:30.334142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.719 [2024-10-01 16:54:30.337366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.719 [2024-10-01 16:54:30.346861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.719 [2024-10-01 16:54:30.347339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.719 [2024-10-01 16:54:30.347355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.719 [2024-10-01 16:54:30.347362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.719 [2024-10-01 16:54:30.347561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.719 [2024-10-01 16:54:30.347760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.719 [2024-10-01 16:54:30.347768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.720 [2024-10-01 16:54:30.347774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.720 [2024-10-01 16:54:30.351003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.720 [2024-10-01 16:54:30.360309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.720 [2024-10-01 16:54:30.360795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.720 [2024-10-01 16:54:30.360809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.720 [2024-10-01 16:54:30.360816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.720 [2024-10-01 16:54:30.361021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.720 [2024-10-01 16:54:30.361220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.720 [2024-10-01 16:54:30.361235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.720 [2024-10-01 16:54:30.361242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.720 [2024-10-01 16:54:30.364463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.720 [2024-10-01 16:54:30.373766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.720 [2024-10-01 16:54:30.374246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.720 [2024-10-01 16:54:30.374261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.720 [2024-10-01 16:54:30.374272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.720 [2024-10-01 16:54:30.374471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.720 [2024-10-01 16:54:30.374669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.720 [2024-10-01 16:54:30.374678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.720 [2024-10-01 16:54:30.374685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.720 [2024-10-01 16:54:30.377909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.720 [2024-10-01 16:54:30.387221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.720 [2024-10-01 16:54:30.387633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.720 [2024-10-01 16:54:30.387647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.720 [2024-10-01 16:54:30.387654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.720 [2024-10-01 16:54:30.387854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.720 [2024-10-01 16:54:30.388059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.720 [2024-10-01 16:54:30.388068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.720 [2024-10-01 16:54:30.388075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.720 7351.50 IOPS, 28.72 MiB/s [2024-10-01 16:54:30.393074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.981 [2024-10-01 16:54:30.400678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.981 [2024-10-01 16:54:30.401294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.981 [2024-10-01 16:54:30.401330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.981 [2024-10-01 16:54:30.401340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.981 [2024-10-01 16:54:30.401558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.981 [2024-10-01 16:54:30.401761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.981 [2024-10-01 16:54:30.401769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.981 [2024-10-01 16:54:30.401776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.981 [2024-10-01 16:54:30.405021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
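The interleaved "7351.50 IOPS, 28.72 MiB/s" line above is the perf tool's periodic throughput sample, printed in the middle of the reconnect errors because statistics reporting keeps running while the controller is down. The two numbers are mutually consistent with a 4 KiB I/O size: 7351.50 x 4096 bytes = 30,111,744 B/s = 28.72 MiB/s. A one-liner to check the conversion (the 4 KiB block size is inferred from the two printed values, not stated anywhere in the log):

/* iops_to_mibs.c -- sanity check of the throughput sample above.
 * The 4096-byte block size is an inference from the log values. */
#include <stdio.h>

int main(void)
{
    double iops = 7351.50;
    double block_bytes = 4096.0;                 /* inferred I/O size */
    double mib_per_s = iops * block_bytes / (1024.0 * 1024.0);
    printf("%.2f IOPS * %.0f B = %.2f MiB/s\n", iops, block_bytes, mib_per_s);
    /* prints: 7351.50 IOPS * 4096 B = 28.72 MiB/s, matching the log */
    return 0;
}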
00:29:38.981 [2024-10-01 16:54:30.414152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.981 [2024-10-01 16:54:30.414667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.981 [2024-10-01 16:54:30.414685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.981 [2024-10-01 16:54:30.414693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.981 [2024-10-01 16:54:30.414893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.981 [2024-10-01 16:54:30.415100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.981 [2024-10-01 16:54:30.415113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.981 [2024-10-01 16:54:30.415120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.982 [2024-10-01 16:54:30.418348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.982 [2024-10-01 16:54:30.427664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.982 [2024-10-01 16:54:30.428183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.982 [2024-10-01 16:54:30.428217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.982 [2024-10-01 16:54:30.428228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.982 [2024-10-01 16:54:30.428446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.982 [2024-10-01 16:54:30.428649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.982 [2024-10-01 16:54:30.428657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.982 [2024-10-01 16:54:30.428664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.982 [2024-10-01 16:54:30.431895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.982 [2024-10-01 16:54:30.441216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.982 [2024-10-01 16:54:30.441813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.982 [2024-10-01 16:54:30.441849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.982 [2024-10-01 16:54:30.441859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.982 [2024-10-01 16:54:30.442086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.982 [2024-10-01 16:54:30.442290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.982 [2024-10-01 16:54:30.442298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.982 [2024-10-01 16:54:30.442305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.982 [2024-10-01 16:54:30.445535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.982 [2024-10-01 16:54:30.454666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.982 [2024-10-01 16:54:30.455294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.982 [2024-10-01 16:54:30.455330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.982 [2024-10-01 16:54:30.455340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.982 [2024-10-01 16:54:30.455557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.982 [2024-10-01 16:54:30.455760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.982 [2024-10-01 16:54:30.455768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.982 [2024-10-01 16:54:30.455775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.982 [2024-10-01 16:54:30.459015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.982 [2024-10-01 16:54:30.468146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.982 [2024-10-01 16:54:30.468761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.982 [2024-10-01 16:54:30.468797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.982 [2024-10-01 16:54:30.468807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.982 [2024-10-01 16:54:30.469034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.982 [2024-10-01 16:54:30.469238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.982 [2024-10-01 16:54:30.469245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.982 [2024-10-01 16:54:30.469253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.982 [2024-10-01 16:54:30.472482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.982 [2024-10-01 16:54:30.481608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.982 [2024-10-01 16:54:30.482215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.982 [2024-10-01 16:54:30.482250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.982 [2024-10-01 16:54:30.482260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.982 [2024-10-01 16:54:30.482478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.982 [2024-10-01 16:54:30.482681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.982 [2024-10-01 16:54:30.482689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.982 [2024-10-01 16:54:30.482696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.982 [2024-10-01 16:54:30.485934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.982 [2024-10-01 16:54:30.495076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.982 [2024-10-01 16:54:30.495707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.982 [2024-10-01 16:54:30.495743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.982 [2024-10-01 16:54:30.495753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.982 [2024-10-01 16:54:30.495981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.982 [2024-10-01 16:54:30.496185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.982 [2024-10-01 16:54:30.496193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.982 [2024-10-01 16:54:30.496200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.982 [2024-10-01 16:54:30.499428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.982 [2024-10-01 16:54:30.508551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.982 [2024-10-01 16:54:30.509179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.982 [2024-10-01 16:54:30.509215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.982 [2024-10-01 16:54:30.509225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.982 [2024-10-01 16:54:30.509451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.982 [2024-10-01 16:54:30.509654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.982 [2024-10-01 16:54:30.509662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.982 [2024-10-01 16:54:30.509669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.982 [2024-10-01 16:54:30.512903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.982 [2024-10-01 16:54:30.522036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.982 [2024-10-01 16:54:30.522629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.982 [2024-10-01 16:54:30.522664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.982 [2024-10-01 16:54:30.522674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.982 [2024-10-01 16:54:30.522892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.982 [2024-10-01 16:54:30.523105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.982 [2024-10-01 16:54:30.523114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.982 [2024-10-01 16:54:30.523121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.982 [2024-10-01 16:54:30.526350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.982 [2024-10-01 16:54:30.535664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.982 [2024-10-01 16:54:30.536262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.982 [2024-10-01 16:54:30.536298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.982 [2024-10-01 16:54:30.536308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.982 [2024-10-01 16:54:30.536526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.982 [2024-10-01 16:54:30.536729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.982 [2024-10-01 16:54:30.536737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.982 [2024-10-01 16:54:30.536744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.982 [2024-10-01 16:54:30.540021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.982 [2024-10-01 16:54:30.549153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.982 [2024-10-01 16:54:30.549779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.982 [2024-10-01 16:54:30.549814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.982 [2024-10-01 16:54:30.549824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.982 [2024-10-01 16:54:30.550052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.982 [2024-10-01 16:54:30.550256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.982 [2024-10-01 16:54:30.550264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.982 [2024-10-01 16:54:30.550275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.982 [2024-10-01 16:54:30.553505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.982 [2024-10-01 16:54:30.562627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.982 [2024-10-01 16:54:30.563243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.983 [2024-10-01 16:54:30.563279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.983 [2024-10-01 16:54:30.563289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.983 [2024-10-01 16:54:30.563507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.983 [2024-10-01 16:54:30.563709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.983 [2024-10-01 16:54:30.563718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.983 [2024-10-01 16:54:30.563725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.983 [2024-10-01 16:54:30.566959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.983 [2024-10-01 16:54:30.576085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.983 [2024-10-01 16:54:30.576646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.983 [2024-10-01 16:54:30.576682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.983 [2024-10-01 16:54:30.576691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.983 [2024-10-01 16:54:30.576909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.983 [2024-10-01 16:54:30.577122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.983 [2024-10-01 16:54:30.577131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.983 [2024-10-01 16:54:30.577138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.983 [2024-10-01 16:54:30.580367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.983 [2024-10-01 16:54:30.589683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.983 [2024-10-01 16:54:30.590295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.983 [2024-10-01 16:54:30.590331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.983 [2024-10-01 16:54:30.590342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.983 [2024-10-01 16:54:30.590563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.983 [2024-10-01 16:54:30.590766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.983 [2024-10-01 16:54:30.590774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.983 [2024-10-01 16:54:30.590781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.983 [2024-10-01 16:54:30.594026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.983 [2024-10-01 16:54:30.603145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.983 [2024-10-01 16:54:30.603721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.983 [2024-10-01 16:54:30.603756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.983 [2024-10-01 16:54:30.603768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.983 [2024-10-01 16:54:30.603997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.983 [2024-10-01 16:54:30.604201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.983 [2024-10-01 16:54:30.604211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.983 [2024-10-01 16:54:30.604218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.983 [2024-10-01 16:54:30.607445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.983 [2024-10-01 16:54:30.616751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.983 [2024-10-01 16:54:30.617272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.983 [2024-10-01 16:54:30.617290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.983 [2024-10-01 16:54:30.617298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.983 [2024-10-01 16:54:30.617497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.983 [2024-10-01 16:54:30.617696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.983 [2024-10-01 16:54:30.617704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.983 [2024-10-01 16:54:30.617711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.983 [2024-10-01 16:54:30.620934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.983 [2024-10-01 16:54:30.630241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.983 [2024-10-01 16:54:30.630530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.983 [2024-10-01 16:54:30.630547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.983 [2024-10-01 16:54:30.630554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.983 [2024-10-01 16:54:30.630755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.983 [2024-10-01 16:54:30.630955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.983 [2024-10-01 16:54:30.630962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.983 [2024-10-01 16:54:30.630974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.983 [2024-10-01 16:54:30.634198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.983 [2024-10-01 16:54:30.643693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.983 [2024-10-01 16:54:30.644290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.983 [2024-10-01 16:54:30.644325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.983 [2024-10-01 16:54:30.644335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.983 [2024-10-01 16:54:30.644554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.983 [2024-10-01 16:54:30.644761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.983 [2024-10-01 16:54:30.644769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.983 [2024-10-01 16:54:30.644777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.983 [2024-10-01 16:54:30.648020] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:38.983 [2024-10-01 16:54:30.657150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.983 [2024-10-01 16:54:30.657740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.983 [2024-10-01 16:54:30.657776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:38.983 [2024-10-01 16:54:30.657786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:38.983 [2024-10-01 16:54:30.658012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:38.983 [2024-10-01 16:54:30.658216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.983 [2024-10-01 16:54:30.658224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.983 [2024-10-01 16:54:30.658231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.983 [2024-10-01 16:54:30.661459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.244 [2024-10-01 16:54:30.670773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.244 [2024-10-01 16:54:30.671410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.244 [2024-10-01 16:54:30.671445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.244 [2024-10-01 16:54:30.671455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.244 [2024-10-01 16:54:30.671674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.244 [2024-10-01 16:54:30.671876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.244 [2024-10-01 16:54:30.671885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.244 [2024-10-01 16:54:30.671892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.244 [2024-10-01 16:54:30.675125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.244 [2024-10-01 16:54:30.684256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.244 [2024-10-01 16:54:30.684825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.244 [2024-10-01 16:54:30.684860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.244 [2024-10-01 16:54:30.684870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.244 [2024-10-01 16:54:30.685097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.244 [2024-10-01 16:54:30.685301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.244 [2024-10-01 16:54:30.685309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.244 [2024-10-01 16:54:30.685316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.244 [2024-10-01 16:54:30.688551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.244 [2024-10-01 16:54:30.697879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.244 [2024-10-01 16:54:30.698525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.244 [2024-10-01 16:54:30.698560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.244 [2024-10-01 16:54:30.698570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.244 [2024-10-01 16:54:30.698788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.244 [2024-10-01 16:54:30.698999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.244 [2024-10-01 16:54:30.699008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.244 [2024-10-01 16:54:30.699015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.244 [2024-10-01 16:54:30.702246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.244 [2024-10-01 16:54:30.711373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.244 [2024-10-01 16:54:30.711879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.244 [2024-10-01 16:54:30.711898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.244 [2024-10-01 16:54:30.711908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.244 [2024-10-01 16:54:30.712114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.244 [2024-10-01 16:54:30.712314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.244 [2024-10-01 16:54:30.712323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.244 [2024-10-01 16:54:30.712329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.244 [2024-10-01 16:54:30.715561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.244 [2024-10-01 16:54:30.724880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.244 [2024-10-01 16:54:30.725496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-10-01 16:54:30.725532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.245 [2024-10-01 16:54:30.725543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.245 [2024-10-01 16:54:30.725760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.245 [2024-10-01 16:54:30.725962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.245 [2024-10-01 16:54:30.725980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.245 [2024-10-01 16:54:30.725988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.245 [2024-10-01 16:54:30.729222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.245 [2024-10-01 16:54:30.738442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.245 [2024-10-01 16:54:30.739072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-10-01 16:54:30.739108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.245 [2024-10-01 16:54:30.739124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.245 [2024-10-01 16:54:30.739344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.245 [2024-10-01 16:54:30.739546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.245 [2024-10-01 16:54:30.739556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.245 [2024-10-01 16:54:30.739564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.245 [2024-10-01 16:54:30.742800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.245 [2024-10-01 16:54:30.751942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.245 [2024-10-01 16:54:30.752584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-10-01 16:54:30.752620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.245 [2024-10-01 16:54:30.752630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.245 [2024-10-01 16:54:30.752847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.245 [2024-10-01 16:54:30.753057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.245 [2024-10-01 16:54:30.753066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.245 [2024-10-01 16:54:30.753073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.245 [2024-10-01 16:54:30.756298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.245 [2024-10-01 16:54:30.765417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.245 [2024-10-01 16:54:30.765926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-10-01 16:54:30.765944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.245 [2024-10-01 16:54:30.765952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.245 [2024-10-01 16:54:30.766156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.245 [2024-10-01 16:54:30.766356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.245 [2024-10-01 16:54:30.766363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.245 [2024-10-01 16:54:30.766370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.245 [2024-10-01 16:54:30.769593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.245 [2024-10-01 16:54:30.778896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.245 [2024-10-01 16:54:30.779391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-10-01 16:54:30.779407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.245 [2024-10-01 16:54:30.779414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.245 [2024-10-01 16:54:30.779614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.245 [2024-10-01 16:54:30.779813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.245 [2024-10-01 16:54:30.779824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.245 [2024-10-01 16:54:30.779831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.245 [2024-10-01 16:54:30.783060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.245 [2024-10-01 16:54:30.792378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.245 [2024-10-01 16:54:30.792887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-10-01 16:54:30.792903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.245 [2024-10-01 16:54:30.792910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.245 [2024-10-01 16:54:30.793115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.245 [2024-10-01 16:54:30.793315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.245 [2024-10-01 16:54:30.793323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.245 [2024-10-01 16:54:30.793330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.245 [2024-10-01 16:54:30.796563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.245 [2024-10-01 16:54:30.805873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.245 [2024-10-01 16:54:30.806368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-10-01 16:54:30.806383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.245 [2024-10-01 16:54:30.806390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.245 [2024-10-01 16:54:30.806590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.245 [2024-10-01 16:54:30.806789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.245 [2024-10-01 16:54:30.806796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.245 [2024-10-01 16:54:30.806803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.245 [2024-10-01 16:54:30.810031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.245 [2024-10-01 16:54:30.819340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.245 [2024-10-01 16:54:30.819855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-10-01 16:54:30.819869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.245 [2024-10-01 16:54:30.819876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.245 [2024-10-01 16:54:30.820081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.245 [2024-10-01 16:54:30.820281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.245 [2024-10-01 16:54:30.820288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.245 [2024-10-01 16:54:30.820295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.245 [2024-10-01 16:54:30.823515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.245 [2024-10-01 16:54:30.832825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.245 [2024-10-01 16:54:30.833247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-10-01 16:54:30.833262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.245 [2024-10-01 16:54:30.833269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.245 [2024-10-01 16:54:30.833468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.245 [2024-10-01 16:54:30.833667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.245 [2024-10-01 16:54:30.833674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.245 [2024-10-01 16:54:30.833680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.245 [2024-10-01 16:54:30.836903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.245 [2024-10-01 16:54:30.846409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.245 [2024-10-01 16:54:30.846907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-10-01 16:54:30.846922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.245 [2024-10-01 16:54:30.846929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.245 [2024-10-01 16:54:30.847139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.246 [2024-10-01 16:54:30.847339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.246 [2024-10-01 16:54:30.847347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.246 [2024-10-01 16:54:30.847353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.246 [2024-10-01 16:54:30.850575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.246 [2024-10-01 16:54:30.859881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.246 [2024-10-01 16:54:30.860372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-10-01 16:54:30.860387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.246 [2024-10-01 16:54:30.860394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.246 [2024-10-01 16:54:30.860593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.246 [2024-10-01 16:54:30.860792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.246 [2024-10-01 16:54:30.860800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.246 [2024-10-01 16:54:30.860806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.246 [2024-10-01 16:54:30.864030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.246 [2024-10-01 16:54:30.873335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.246 [2024-10-01 16:54:30.873825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-10-01 16:54:30.873839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.246 [2024-10-01 16:54:30.873846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.246 [2024-10-01 16:54:30.874055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.246 [2024-10-01 16:54:30.874255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.246 [2024-10-01 16:54:30.874263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.246 [2024-10-01 16:54:30.874270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.246 [2024-10-01 16:54:30.877490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.246 [2024-10-01 16:54:30.886794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.246 [2024-10-01 16:54:30.887290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-10-01 16:54:30.887305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.246 [2024-10-01 16:54:30.887312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.246 [2024-10-01 16:54:30.887511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.246 [2024-10-01 16:54:30.887710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.246 [2024-10-01 16:54:30.887717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.246 [2024-10-01 16:54:30.887724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.246 [2024-10-01 16:54:30.890949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.246 [2024-10-01 16:54:30.900261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.246 [2024-10-01 16:54:30.900782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-10-01 16:54:30.900797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.246 [2024-10-01 16:54:30.900804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.246 [2024-10-01 16:54:30.901009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.246 [2024-10-01 16:54:30.901209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.246 [2024-10-01 16:54:30.901216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.246 [2024-10-01 16:54:30.901223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.246 [2024-10-01 16:54:30.904442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.246 [2024-10-01 16:54:30.913741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.246 [2024-10-01 16:54:30.914236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-10-01 16:54:30.914251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.246 [2024-10-01 16:54:30.914258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.246 [2024-10-01 16:54:30.914457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.246 [2024-10-01 16:54:30.914656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.246 [2024-10-01 16:54:30.914663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.246 [2024-10-01 16:54:30.914673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.246 [2024-10-01 16:54:30.917896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.507 [2024-10-01 16:54:30.927210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.507 [2024-10-01 16:54:30.927694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.507 [2024-10-01 16:54:30.927709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.507 [2024-10-01 16:54:30.927716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.507 [2024-10-01 16:54:30.927914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.507 [2024-10-01 16:54:30.928119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.507 [2024-10-01 16:54:30.928128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.507 [2024-10-01 16:54:30.928135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.507 [2024-10-01 16:54:30.931360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.507 [2024-10-01 16:54:30.940670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.507 [2024-10-01 16:54:30.941180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.507 [2024-10-01 16:54:30.941215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.507 [2024-10-01 16:54:30.941225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.507 [2024-10-01 16:54:30.941443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.507 [2024-10-01 16:54:30.941646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.507 [2024-10-01 16:54:30.941655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.507 [2024-10-01 16:54:30.941662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.507 [2024-10-01 16:54:30.944898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.508 [2024-10-01 16:54:30.954154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.508 [2024-10-01 16:54:30.954704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.508 [2024-10-01 16:54:30.954740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.508 [2024-10-01 16:54:30.954750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.508 [2024-10-01 16:54:30.954968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.508 [2024-10-01 16:54:30.955180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.508 [2024-10-01 16:54:30.955188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.508 [2024-10-01 16:54:30.955195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.508 [2024-10-01 16:54:30.958426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.508 [2024-10-01 16:54:30.967750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.508 [2024-10-01 16:54:30.968400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.508 [2024-10-01 16:54:30.968437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.508 [2024-10-01 16:54:30.968447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.508 [2024-10-01 16:54:30.968665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.508 [2024-10-01 16:54:30.968867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.508 [2024-10-01 16:54:30.968875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.508 [2024-10-01 16:54:30.968882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.508 [2024-10-01 16:54:30.972123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.508 [2024-10-01 16:54:30.981258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.508 [2024-10-01 16:54:30.981883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.508 [2024-10-01 16:54:30.981919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.508 [2024-10-01 16:54:30.981929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.508 [2024-10-01 16:54:30.982157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.508 [2024-10-01 16:54:30.982360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.508 [2024-10-01 16:54:30.982369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.508 [2024-10-01 16:54:30.982376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.508 [2024-10-01 16:54:30.985606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.508 [2024-10-01 16:54:30.994752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.508 [2024-10-01 16:54:30.995362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.508 [2024-10-01 16:54:30.995397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.508 [2024-10-01 16:54:30.995407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.508 [2024-10-01 16:54:30.995625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.508 [2024-10-01 16:54:30.995828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.508 [2024-10-01 16:54:30.995836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.508 [2024-10-01 16:54:30.995843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.508 [2024-10-01 16:54:30.999085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.508 [2024-10-01 16:54:31.008220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.508 [2024-10-01 16:54:31.008820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.508 [2024-10-01 16:54:31.008856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.508 [2024-10-01 16:54:31.008867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.508 [2024-10-01 16:54:31.009098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.508 [2024-10-01 16:54:31.009302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.508 [2024-10-01 16:54:31.009311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.508 [2024-10-01 16:54:31.009318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.508 [2024-10-01 16:54:31.012548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.508 [2024-10-01 16:54:31.021676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.508 [2024-10-01 16:54:31.022269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.508 [2024-10-01 16:54:31.022304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.508 [2024-10-01 16:54:31.022314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.508 [2024-10-01 16:54:31.022532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.508 [2024-10-01 16:54:31.022734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.508 [2024-10-01 16:54:31.022742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.508 [2024-10-01 16:54:31.022749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.508 [2024-10-01 16:54:31.025982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.508 [2024-10-01 16:54:31.035294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.508 [2024-10-01 16:54:31.035794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.508 [2024-10-01 16:54:31.035812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.508 [2024-10-01 16:54:31.035820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.508 [2024-10-01 16:54:31.036027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.508 [2024-10-01 16:54:31.036228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.508 [2024-10-01 16:54:31.036235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.508 [2024-10-01 16:54:31.036242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.508 [2024-10-01 16:54:31.039464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.508 [2024-10-01 16:54:31.048769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.508 [2024-10-01 16:54:31.049348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.508 [2024-10-01 16:54:31.049383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.508 [2024-10-01 16:54:31.049393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.508 [2024-10-01 16:54:31.049611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.508 [2024-10-01 16:54:31.049814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.508 [2024-10-01 16:54:31.049822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.508 [2024-10-01 16:54:31.049829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.508 [2024-10-01 16:54:31.053070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.508 [2024-10-01 16:54:31.062377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.508 [2024-10-01 16:54:31.062925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.508 [2024-10-01 16:54:31.062943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.508 [2024-10-01 16:54:31.062951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.508 [2024-10-01 16:54:31.063156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.508 [2024-10-01 16:54:31.063356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.508 [2024-10-01 16:54:31.063365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.508 [2024-10-01 16:54:31.063372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.508 [2024-10-01 16:54:31.066593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.508 [2024-10-01 16:54:31.075895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.508 [2024-10-01 16:54:31.076272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.508 [2024-10-01 16:54:31.076290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.508 [2024-10-01 16:54:31.076298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.508 [2024-10-01 16:54:31.076498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.508 [2024-10-01 16:54:31.076697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.508 [2024-10-01 16:54:31.076704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.508 [2024-10-01 16:54:31.076711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.508 [2024-10-01 16:54:31.079934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.508 [2024-10-01 16:54:31.089429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.508 [2024-10-01 16:54:31.089914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.508 [2024-10-01 16:54:31.089929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.509 [2024-10-01 16:54:31.089936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.509 [2024-10-01 16:54:31.090140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.509 [2024-10-01 16:54:31.090340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.509 [2024-10-01 16:54:31.090347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.509 [2024-10-01 16:54:31.090354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.509 [2024-10-01 16:54:31.093743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.509 [2024-10-01 16:54:31.102877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.509 [2024-10-01 16:54:31.103372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.509 [2024-10-01 16:54:31.103391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.509 [2024-10-01 16:54:31.103399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.509 [2024-10-01 16:54:31.103598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.509 [2024-10-01 16:54:31.103798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.509 [2024-10-01 16:54:31.103806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.509 [2024-10-01 16:54:31.103812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.509 [2024-10-01 16:54:31.107036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.509 [2024-10-01 16:54:31.116333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.509 [2024-10-01 16:54:31.116821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.509 [2024-10-01 16:54:31.116836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.509 [2024-10-01 16:54:31.116842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.509 [2024-10-01 16:54:31.117047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.509 [2024-10-01 16:54:31.117247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.509 [2024-10-01 16:54:31.117255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.509 [2024-10-01 16:54:31.117261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.509 [2024-10-01 16:54:31.120481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.509 [2024-10-01 16:54:31.129778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.509 [2024-10-01 16:54:31.130247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.509 [2024-10-01 16:54:31.130262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.509 [2024-10-01 16:54:31.130268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.509 [2024-10-01 16:54:31.130468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.509 [2024-10-01 16:54:31.130667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.509 [2024-10-01 16:54:31.130675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.509 [2024-10-01 16:54:31.130682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.509 [2024-10-01 16:54:31.133906] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.509 [2024-10-01 16:54:31.143396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.509 [2024-10-01 16:54:31.143884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.509 [2024-10-01 16:54:31.143899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.509 [2024-10-01 16:54:31.143906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.509 [2024-10-01 16:54:31.144110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.509 [2024-10-01 16:54:31.144313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.509 [2024-10-01 16:54:31.144322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.509 [2024-10-01 16:54:31.144328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.509 [2024-10-01 16:54:31.147547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.509 [2024-10-01 16:54:31.156854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.509 [2024-10-01 16:54:31.157323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.509 [2024-10-01 16:54:31.157338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.509 [2024-10-01 16:54:31.157345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.509 [2024-10-01 16:54:31.157544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.509 [2024-10-01 16:54:31.157743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.509 [2024-10-01 16:54:31.157752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.509 [2024-10-01 16:54:31.157758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.509 [2024-10-01 16:54:31.160983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.509 [2024-10-01 16:54:31.170468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.509 [2024-10-01 16:54:31.170941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.509 [2024-10-01 16:54:31.170956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.509 [2024-10-01 16:54:31.170962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.509 [2024-10-01 16:54:31.171166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.509 [2024-10-01 16:54:31.171366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.509 [2024-10-01 16:54:31.171374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.509 [2024-10-01 16:54:31.171381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.509 [2024-10-01 16:54:31.174599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.509 [2024-10-01 16:54:31.184089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.509 [2024-10-01 16:54:31.184504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.509 [2024-10-01 16:54:31.184518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.509 [2024-10-01 16:54:31.184524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.509 [2024-10-01 16:54:31.184723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.509 [2024-10-01 16:54:31.184922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.509 [2024-10-01 16:54:31.184930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.509 [2024-10-01 16:54:31.184937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.509 [2024-10-01 16:54:31.188163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.770 [2024-10-01 16:54:31.197670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.770 [2024-10-01 16:54:31.198163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-10-01 16:54:31.198178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.770 [2024-10-01 16:54:31.198185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.770 [2024-10-01 16:54:31.198384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.770 [2024-10-01 16:54:31.198583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.770 [2024-10-01 16:54:31.198591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.770 [2024-10-01 16:54:31.198597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.770 [2024-10-01 16:54:31.201819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.770 [2024-10-01 16:54:31.211120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.770 [2024-10-01 16:54:31.211611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.770 [2024-10-01 16:54:31.211626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.770 [2024-10-01 16:54:31.211633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.770 [2024-10-01 16:54:31.211832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.770 [2024-10-01 16:54:31.212037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.770 [2024-10-01 16:54:31.212046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.770 [2024-10-01 16:54:31.212052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.771 [2024-10-01 16:54:31.215273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.771 [2024-10-01 16:54:31.224575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.771 [2024-10-01 16:54:31.225062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.771 [2024-10-01 16:54:31.225077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.771 [2024-10-01 16:54:31.225085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.771 [2024-10-01 16:54:31.225284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.771 [2024-10-01 16:54:31.225483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.771 [2024-10-01 16:54:31.225490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.771 [2024-10-01 16:54:31.225497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.771 [2024-10-01 16:54:31.228716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.771 [2024-10-01 16:54:31.238027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.771 [2024-10-01 16:54:31.238592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.771 [2024-10-01 16:54:31.238628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.771 [2024-10-01 16:54:31.238644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.771 [2024-10-01 16:54:31.238863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.771 [2024-10-01 16:54:31.239073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.771 [2024-10-01 16:54:31.239087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.771 [2024-10-01 16:54:31.239103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.771 [2024-10-01 16:54:31.242334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.771 [... 10 further identical reconnect cycles trimmed (16:54:31.251 through 16:54:31.390): nvme_ctrlr_disconnect resets the controller, posix_sock_create connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0xd84d60 (10.0.0.2:4420), and _bdev_nvme_reset_ctrlr_complete logs "Resetting controller failed." ...]
00:29:39.772 [2024-10-01 16:54:31.386615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.772 [2024-10-01 16:54:31.387205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.772 [2024-10-01 16:54:31.387222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.772 [2024-10-01 16:54:31.387229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.772 [2024-10-01 16:54:31.387428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.772 [2024-10-01 16:54:31.387627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.772 [2024-10-01 16:54:31.387636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.772 [2024-10-01 16:54:31.387642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.772 [2024-10-01 16:54:31.390867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.772 5881.20 IOPS, 22.97 MiB/s [2024-10-01 16:54:31.400247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.772 [2024-10-01 16:54:31.400630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.772 [2024-10-01 16:54:31.400649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:39.772 [2024-10-01 16:54:31.400657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:39.772 [2024-10-01 16:54:31.400858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:39.772 [2024-10-01 16:54:31.401067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.772 [2024-10-01 16:54:31.401077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.772 [2024-10-01 16:54:31.401084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.772 [2024-10-01 16:54:31.404307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
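The interleaved "5881.20 IOPS, 22.97 MiB/s" entry is one of bdevperf's periodic performance reports, printed while the reconnect attempts fail around it. The two figures are mutually consistent with a 4 KiB I/O size, which is inferred from the arithmetic rather than stated in the log: 5881.20 IOPS x 4096 B = 24,089,395 B/s, and 24,089,395 / 1,048,576 = 22.97 MiB/s.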
00:29:40.034 [... 34 further identical reconnect cycles trimmed (16:54:31.413 through 16:54:31.863, job clock advancing from 00:29:39.772 to 00:29:40.296): every attempt fails the same way, connect() errno = 111 against tqpair=0xd84d60 at 10.0.0.2:4420 followed by "Resetting controller failed." ...]
00:29:40.297 [2024-10-01 16:54:31.873100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.297 [2024-10-01 16:54:31.873572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.297 [2024-10-01 16:54:31.873608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.297 [2024-10-01 16:54:31.873618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.297 [2024-10-01 16:54:31.873836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.297 [2024-10-01 16:54:31.874051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.297 [2024-10-01 16:54:31.874068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.297 [2024-10-01 16:54:31.874075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.297 [2024-10-01 16:54:31.877303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2858160 Killed "${NVMF_APP[@]}" "$@" 00:29:40.297 [2024-10-01 16:54:31.886612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.297 16:54:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:40.297 16:54:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:40.297 [2024-10-01 16:54:31.887251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.297 [2024-10-01 16:54:31.887287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.297 [2024-10-01 16:54:31.887297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.297 [2024-10-01 16:54:31.887515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.297 16:54:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:40.297 [2024-10-01 16:54:31.887718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.297 [2024-10-01 16:54:31.887726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.297 [2024-10-01 16:54:31.887733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.297 16:54:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:40.297 16:54:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:40.297 [2024-10-01 16:54:31.890965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.297 16:54:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=2859530 00:29:40.297 16:54:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 2859530 00:29:40.297 16:54:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:40.297 16:54:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2859530 ']' 00:29:40.297 16:54:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.297 16:54:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:40.297 16:54:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.297 16:54:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:40.297 16:54:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:40.297 [2024-10-01 16:54:31.900103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.297 [2024-10-01 16:54:31.900568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.297 [2024-10-01 16:54:31.900586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.297 [2024-10-01 16:54:31.900594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.297 [2024-10-01 16:54:31.900795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.297 [2024-10-01 16:54:31.901007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.297 [2024-10-01 16:54:31.901016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.297 [2024-10-01 16:54:31.901023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.297 [2024-10-01 16:54:31.904250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
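The Killed "${NVMF_APP[@]}" message and the tgt_init/nvmfappstart trace interleaved above are the point of this test phase: bdevperf.sh SIGKILLs the running target (pid 2858160) mid-I/O, then starts a fresh nvmf_tgt (pid 2859530) inside the cvl_0_0_ns_spdk namespace and waits for its RPC socket. A hedged sketch of what that wait amounts to (the real waitforlisten helper lives in the SPDK test harness; this polling loop is illustrative):

    # Poll the RPC socket until the new nvmf_tgt answers; rpc_get_methods is a
    # cheap RPC that keeps failing until the app is up and listening.
    wait_for_rpc_sock() {
        local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
        while (( retries-- > 0 )); do
            if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                    -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }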
00:29:40.297 [2024-10-01 16:54:31.913557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.297 [2024-10-01 16:54:31.914048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.298 [2024-10-01 16:54:31.914065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.298 [2024-10-01 16:54:31.914072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.298 [2024-10-01 16:54:31.914273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.298 [2024-10-01 16:54:31.914472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.298 [2024-10-01 16:54:31.914480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.298 [2024-10-01 16:54:31.914487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.298 [2024-10-01 16:54:31.917707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.298 [2024-10-01 16:54:31.927017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.298 [2024-10-01 16:54:31.927524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.298 [2024-10-01 16:54:31.927539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.298 [2024-10-01 16:54:31.927546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.298 [2024-10-01 16:54:31.927745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.298 [2024-10-01 16:54:31.927944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.298 [2024-10-01 16:54:31.927952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.298 [2024-10-01 16:54:31.927959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.298 [2024-10-01 16:54:31.931185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.298 [2024-10-01 16:54:31.940488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.298 [2024-10-01 16:54:31.941009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.298 [2024-10-01 16:54:31.941045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.298 [2024-10-01 16:54:31.941055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.298 [2024-10-01 16:54:31.941275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.298 [2024-10-01 16:54:31.941477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.298 [2024-10-01 16:54:31.941485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.298 [2024-10-01 16:54:31.941492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.298 [2024-10-01 16:54:31.944732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.298 [2024-10-01 16:54:31.947747] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:29:40.298 [2024-10-01 16:54:31.947789] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:40.298 [2024-10-01 16:54:31.954054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.298 [2024-10-01 16:54:31.954584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.298 [2024-10-01 16:54:31.954602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.298 [2024-10-01 16:54:31.954610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.298 [2024-10-01 16:54:31.954811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.298 [2024-10-01 16:54:31.955018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.298 [2024-10-01 16:54:31.955026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.298 [2024-10-01 16:54:31.955033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.298 [2024-10-01 16:54:31.958257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
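A side note on the DPDK EAL parameters above: the core mask -c 0xE is binary 1110, i.e. cores 1, 2 and 3, which is why "Total cores available: 3" and three "Reactor started on core N" notices appear further down. A one-line check of that decoding:

    # 0xE = 1110b -> prints "cores: 1 2 3"
    printf 'cores:'; for i in {0..3}; do (( (0xE >> i) & 1 )) && printf ' %d' "$i"; done; echo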
00:29:40.298 [2024-10-01 16:54:31.967566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.298 [2024-10-01 16:54:31.968034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.298 [2024-10-01 16:54:31.968070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.298 [2024-10-01 16:54:31.968081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.298 [2024-10-01 16:54:31.968302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.298 [2024-10-01 16:54:31.968504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.298 [2024-10-01 16:54:31.968513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.298 [2024-10-01 16:54:31.968520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.298 [2024-10-01 16:54:31.971752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.559 [2024-10-01 16:54:31.981176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.559 [2024-10-01 16:54:31.981806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.559 [2024-10-01 16:54:31.981842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.559 [2024-10-01 16:54:31.981852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.559 [2024-10-01 16:54:31.982077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.559 [2024-10-01 16:54:31.982281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.559 [2024-10-01 16:54:31.982290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.559 [2024-10-01 16:54:31.982298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.559 [2024-10-01 16:54:31.985526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.559 [2024-10-01 16:54:31.994653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.559 [2024-10-01 16:54:31.995268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.559 [2024-10-01 16:54:31.995304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.559 [2024-10-01 16:54:31.995315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.559 [2024-10-01 16:54:31.995533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.559 [2024-10-01 16:54:31.995736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.559 [2024-10-01 16:54:31.995744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.559 [2024-10-01 16:54:31.995751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.559 [2024-10-01 16:54:31.999002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.559 [2024-10-01 16:54:32.004303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:40.559 [2024-10-01 16:54:32.008129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.559 [2024-10-01 16:54:32.008716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.559 [2024-10-01 16:54:32.008752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.559 [2024-10-01 16:54:32.008762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.559 [2024-10-01 16:54:32.008988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.559 [2024-10-01 16:54:32.009192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.559 [2024-10-01 16:54:32.009200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.559 [2024-10-01 16:54:32.009208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.559 [2024-10-01 16:54:32.012437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.559 [2024-10-01 16:54:32.021756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.559 [2024-10-01 16:54:32.022369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.559 [2024-10-01 16:54:32.022405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.559 [2024-10-01 16:54:32.022416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.559 [2024-10-01 16:54:32.022634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.559 [2024-10-01 16:54:32.022837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.559 [2024-10-01 16:54:32.022845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.559 [2024-10-01 16:54:32.022852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.559 [2024-10-01 16:54:32.026089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.559 [2024-10-01 16:54:32.035217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.559 [2024-10-01 16:54:32.035814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.559 [2024-10-01 16:54:32.035852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.560 [2024-10-01 16:54:32.035868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.560 [2024-10-01 16:54:32.036094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.560 [2024-10-01 16:54:32.036298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.560 [2024-10-01 16:54:32.036308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.560 [2024-10-01 16:54:32.036316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.560 [2024-10-01 16:54:32.039544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.560 [2024-10-01 16:54:32.048866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.560 [2024-10-01 16:54:32.049309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.560 [2024-10-01 16:54:32.049327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.560 [2024-10-01 16:54:32.049335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.560 [2024-10-01 16:54:32.049536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.560 [2024-10-01 16:54:32.049736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.560 [2024-10-01 16:54:32.049744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.560 [2024-10-01 16:54:32.049751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.560 [2024-10-01 16:54:32.052979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.560 [2024-10-01 16:54:32.058154] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:40.560 [2024-10-01 16:54:32.058177] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:40.560 [2024-10-01 16:54:32.058184] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:40.560 [2024-10-01 16:54:32.058189] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:40.560 [2024-10-01 16:54:32.058193] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:40.560 [2024-10-01 16:54:32.058305] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:40.560 [2024-10-01 16:54:32.058441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:40.560 [2024-10-01 16:54:32.058443] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.560 [2024-10-01 16:54:32.062482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.560 [2024-10-01 16:54:32.063072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.560 [2024-10-01 16:54:32.063109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.560 [2024-10-01 16:54:32.063121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.560 [2024-10-01 16:54:32.063345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.560 [2024-10-01 16:54:32.063547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.560 [2024-10-01 16:54:32.063555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.560 [2024-10-01 16:54:32.063562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:40.560 [2024-10-01 16:54:32.066806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.560 [2024-10-01 16:54:32.075935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.560 [2024-10-01 16:54:32.076622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.560 [2024-10-01 16:54:32.076661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.560 [2024-10-01 16:54:32.076673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.560 [2024-10-01 16:54:32.076893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.560 [2024-10-01 16:54:32.077103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.560 [2024-10-01 16:54:32.077113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.560 [2024-10-01 16:54:32.077121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.560 [2024-10-01 16:54:32.080349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.560 [2024-10-01 16:54:32.089472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.560 [2024-10-01 16:54:32.089928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.560 [2024-10-01 16:54:32.089948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.560 [2024-10-01 16:54:32.089956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.560 [2024-10-01 16:54:32.090164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.560 [2024-10-01 16:54:32.090365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.560 [2024-10-01 16:54:32.090373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.560 [2024-10-01 16:54:32.090380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.560 [2024-10-01 16:54:32.093800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
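The app_setup_trace banner a few records up (tracepoint group mask 0xFFFF, shm file /dev/shm/nvmf_trace.0) spells out how to inspect this run's tracepoints. A hedged example of acting on those notices while the target is alive, assuming this workspace's build tree:

    # Snapshot live events for the nvmf app with shm id 0, as the banner suggests:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
    # Or keep the raw shared-memory trace file for offline analysis/debug:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0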
00:29:40.560 [2024-10-01 16:54:32.102957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.560 [2024-10-01 16:54:32.103500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.560 [2024-10-01 16:54:32.103519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.560 [2024-10-01 16:54:32.103527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.560 [2024-10-01 16:54:32.103728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.560 [2024-10-01 16:54:32.103927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.560 [2024-10-01 16:54:32.103936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.560 [2024-10-01 16:54:32.103942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.560 [2024-10-01 16:54:32.107169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.560 [2024-10-01 16:54:32.116471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.560 [2024-10-01 16:54:32.116853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.560 [2024-10-01 16:54:32.116870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.560 [2024-10-01 16:54:32.116882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.560 [2024-10-01 16:54:32.117088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.560 [2024-10-01 16:54:32.117289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.560 [2024-10-01 16:54:32.117296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.560 [2024-10-01 16:54:32.117303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.560 [2024-10-01 16:54:32.120525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.560 [2024-10-01 16:54:32.130025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.560 [2024-10-01 16:54:32.130450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.560 [2024-10-01 16:54:32.130465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.560 [2024-10-01 16:54:32.130472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.560 [2024-10-01 16:54:32.130671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.560 [2024-10-01 16:54:32.130870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.560 [2024-10-01 16:54:32.130879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.560 [2024-10-01 16:54:32.130886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.560 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:40.560 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:29:40.560 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:40.560 [2024-10-01 16:54:32.134113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.560 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:40.560 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:40.560 [2024-10-01 16:54:32.143609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.560 [2024-10-01 16:54:32.144128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.560 [2024-10-01 16:54:32.144143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.560 [2024-10-01 16:54:32.144150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.560 [2024-10-01 16:54:32.144349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.560 [2024-10-01 16:54:32.144549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.560 [2024-10-01 16:54:32.144557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.560 [2024-10-01 16:54:32.144563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.560 [2024-10-01 16:54:32.147794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.560 [2024-10-01 16:54:32.157104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.560 [2024-10-01 16:54:32.157608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.560 [2024-10-01 16:54:32.157623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.561 [2024-10-01 16:54:32.157634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.561 [2024-10-01 16:54:32.157833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.561 [2024-10-01 16:54:32.158037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.561 [2024-10-01 16:54:32.158047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.561 [2024-10-01 16:54:32.158054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.561 [2024-10-01 16:54:32.161276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.561 [2024-10-01 16:54:32.170583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.561 [2024-10-01 16:54:32.171257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.561 [2024-10-01 16:54:32.171293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.561 [2024-10-01 16:54:32.171304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.561 [2024-10-01 16:54:32.171525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.561 [2024-10-01 16:54:32.171727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.561 [2024-10-01 16:54:32.171736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.561 [2024-10-01 16:54:32.171744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.561 [2024-10-01 16:54:32.174983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:40.561 [2024-10-01 16:54:32.180122] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:40.561 [2024-10-01 16:54:32.184105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.561 [2024-10-01 16:54:32.184738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.561 [2024-10-01 16:54:32.184774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.561 [2024-10-01 16:54:32.184784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.561 [2024-10-01 16:54:32.185008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.561 [2024-10-01 16:54:32.185212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.561 [2024-10-01 16:54:32.185221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.561 [2024-10-01 16:54:32.185228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.561 [2024-10-01 16:54:32.188456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.561 [2024-10-01 16:54:32.197595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.561 [2024-10-01 16:54:32.198113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.561 [2024-10-01 16:54:32.198135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.561 [2024-10-01 16:54:32.198143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.561 [2024-10-01 16:54:32.198344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.561 [2024-10-01 16:54:32.198554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.561 [2024-10-01 16:54:32.198562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.561 [2024-10-01 16:54:32.198569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:40.561 [2024-10-01 16:54:32.201794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.561 [2024-10-01 16:54:32.211108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.561 [2024-10-01 16:54:32.211632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.561 [2024-10-01 16:54:32.211648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.561 [2024-10-01 16:54:32.211656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.561 [2024-10-01 16:54:32.211856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.561 [2024-10-01 16:54:32.212061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.561 [2024-10-01 16:54:32.212070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.561 [2024-10-01 16:54:32.212077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.561 Malloc0 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:40.561 [2024-10-01 16:54:32.215298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:40.561 [2024-10-01 16:54:32.224603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.561 [2024-10-01 16:54:32.225277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.561 [2024-10-01 16:54:32.225313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd84d60 with addr=10.0.0.2, port=4420 00:29:40.561 [2024-10-01 16:54:32.225324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd84d60 is same with the state(6) to be set 00:29:40.561 [2024-10-01 16:54:32.225543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd84d60 (9): Bad file descriptor 00:29:40.561 [2024-10-01 16:54:32.225750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.561 [2024-10-01 16:54:32.225759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.561 [2024-10-01 16:54:32.225766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.561 [2024-10-01 16:54:32.228999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:40.561 [2024-10-01 16:54:32.233379] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.561 [2024-10-01 16:54:32.238121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.561 16:54:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2858517 00:29:40.820 [2024-10-01 16:54:32.324123] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
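The rpc_cmd calls traced across the last few blocks rebuild the target the host has been trying to reach. A hedged rendering of the same bring-up as plain rpc.py invocations, assuming the default /var/tmp/spdk.sock (flags copied from the trace; the comments are interpretation):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport, 8 KiB IO unit size
    $RPC bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB ram bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up ("NVMe/TCP Target Listening" above), the next reset attempt lands ("Resetting controller successful"), and the bdevperf summary that follows shows the running IOPS average recovering toward 8975.73. As a quick consistency check on that summary, MiB/s should equal IOPS times the 4 KiB IO size:

    awk 'BEGIN { printf "%.2f MiB/s\n", 8971.64 * 4096 / (1024 * 1024) }'   # prints 35.05, matching the table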
00:29:50.152 5047.33 IOPS, 19.72 MiB/s 5956.86 IOPS, 23.27 MiB/s 6644.38 IOPS, 25.95 MiB/s 7185.89 IOPS, 28.07 MiB/s 7670.00 IOPS, 29.96 MiB/s 8004.00 IOPS, 31.27 MiB/s 8304.83 IOPS, 32.44 MiB/s 8577.54 IOPS, 33.51 MiB/s 8782.71 IOPS, 34.31 MiB/s 8975.73 IOPS, 35.06 MiB/s 00:29:50.152 Latency(us) 00:29:50.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.152 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:50.152 Verification LBA range: start 0x0 length 0x4000 00:29:50.152 Nvme1n1 : 15.01 8971.64 35.05 9209.38 0.00 7016.46 743.58 13409.67 00:29:50.152 =================================================================================================================== 00:29:50.152 Total : 8971.64 35.05 9209.38 0.00 7016.46 743.58 13409.67 00:29:50.152 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:50.152 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:50.152 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.152 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.152 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.152 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:50.152 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:50.152 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:50.152 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:29:50.152 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:50.153 rmmod nvme_tcp 00:29:50.153 rmmod nvme_fabrics 00:29:50.153 rmmod nvme_keyring 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 2859530 ']' 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 2859530 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2859530 ']' 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2859530 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2859530 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:50.153 16:54:41 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2859530' 00:29:50.153 killing process with pid 2859530 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2859530 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2859530 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:29:50.153 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:29:50.413 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:50.413 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:50.413 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.413 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.413 16:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.322 16:54:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:52.322 00:29:52.322 real 0m27.334s 00:29:52.322 user 1m1.790s 00:29:52.322 sys 0m7.213s 00:29:52.322 16:54:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:52.322 16:54:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.322 ************************************ 00:29:52.322 END TEST nvmf_bdevperf 00:29:52.322 ************************************ 00:29:52.322 16:54:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:52.322 16:54:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:52.322 16:54:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:52.322 16:54:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.322 ************************************ 00:29:52.322 START TEST nvmf_target_disconnect 00:29:52.322 ************************************ 00:29:52.322 16:54:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:52.583 * Looking for test storage... 
00:29:52.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:52.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.583 --rc genhtml_branch_coverage=1 00:29:52.583 --rc genhtml_function_coverage=1 00:29:52.583 --rc genhtml_legend=1 00:29:52.583 --rc geninfo_all_blocks=1 00:29:52.583 --rc geninfo_unexecuted_blocks=1 00:29:52.583 00:29:52.583 ' 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:52.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.583 --rc genhtml_branch_coverage=1 00:29:52.583 --rc genhtml_function_coverage=1 00:29:52.583 --rc genhtml_legend=1 00:29:52.583 --rc geninfo_all_blocks=1 00:29:52.583 --rc geninfo_unexecuted_blocks=1 00:29:52.583 00:29:52.583 ' 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:52.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.583 --rc genhtml_branch_coverage=1 00:29:52.583 --rc genhtml_function_coverage=1 00:29:52.583 --rc genhtml_legend=1 00:29:52.583 --rc geninfo_all_blocks=1 00:29:52.583 --rc geninfo_unexecuted_blocks=1 00:29:52.583 00:29:52.583 ' 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:52.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.583 --rc genhtml_branch_coverage=1 00:29:52.583 --rc genhtml_function_coverage=1 00:29:52.583 --rc genhtml_legend=1 00:29:52.583 --rc geninfo_all_blocks=1 00:29:52.583 --rc geninfo_unexecuted_blocks=1 00:29:52.583 00:29:52.583 ' 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.583 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:52.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:52.584 16:54:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:00.716 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:00.717 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:00.717 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:00.717 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:00.717 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
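The ver1[v]/ver2[v] walk traced earlier is scripts/common.sh comparing lcov's major version (1) against 2, apparently so the harness can pick the pre-2.0 '--rc lcov_branch_coverage=1' option spellings. A minimal sketch of the same component-wise comparison, assuming plain dotted numeric versions (version_ge is our name for it, not SPDK's):

# Succeed when dotted version $1 >= $2, comparing numerically
# component by component and padding the shorter version with zeros.
version_ge() {
    local IFS=. i
    local -a v1=($1) v2=($2)
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1
    done
    return 0   # all components equal
}
version_ge 2.0.1 2 && echo "lcov >= 2"   # illustrative call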
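Note also the genuine shell error captured above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and test aborts with "integer expression expected", because whichever variable build_nvmf_app_args checks there is empty. The trace does not reveal that variable's name, so SOME_FLAG below is a stand-in; the fix pattern is simply to give the numeric test a default:

# test(1) needs integers on both sides of -eq; an unset/empty variable
# turns '[' '' -eq 1 ']' into an error. Default it to 0 instead:
if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # SOME_FLAG is hypothetical
    echo "flag enabled"
fi
# ...or use a string comparison, which tolerates empty values:
[[ $SOME_FLAG == 1 ]] && echo "flag enabled"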
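The device scan just traced (gather_supported_nvmf_pci_devs) matches a whitelist of Intel and Mellanox device IDs - here the two E810 functions 0000:4b:00.0/1, device 0x159b - and then resolves each PCI function to its kernel net device through sysfs, which is how the cvl_0_0/cvl_0_1 names appear. A stripped-down sketch of that lookup (standalone, not the common.sh code itself):

# List the net interface(s) bound to each Intel E810 function
# (vendor 0x8086, device 0x159b), the same sysfs walk used above.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for net in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$net" ] && echo "Found net device under $pci: ${net##*/}"
    done
done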
00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:00.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:00.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:30:00.717 00:30:00.717 --- 10.0.0.2 ping statistics --- 00:30:00.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.717 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:00.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:00.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:30:00.717 00:30:00.717 --- 10.0.0.1 ping statistics --- 00:30:00.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.717 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:00.717 ************************************ 00:30:00.717 START TEST nvmf_target_disconnect_tc1 00:30:00.717 ************************************ 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:00.717 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:00.717 16:54:51 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:00.718 [2024-10-01 16:54:51.543358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.718 [2024-10-01 16:54:51.543447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b5f630 with addr=10.0.0.2, port=4420 00:30:00.718 [2024-10-01 16:54:51.543482] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:00.718 [2024-10-01 16:54:51.543501] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:00.718 [2024-10-01 16:54:51.543510] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:00.718 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:00.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:00.718 Initializing NVMe Controllers 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:00.718 00:30:00.718 real 0m0.128s 00:30:00.718 user 0m0.052s 00:30:00.718 sys 0m0.075s 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:00.718 ************************************ 00:30:00.718 END TEST nvmf_target_disconnect_tc1 00:30:00.718 ************************************ 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:00.718 ************************************ 00:30:00.718 START TEST nvmf_target_disconnect_tc2 00:30:00.718 ************************************ 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2865019 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2865019 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2865019 ']' 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:00.718 [2024-10-01 16:54:51.707800] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:30:00.718 [2024-10-01 16:54:51.707865] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.718 [2024-10-01 16:54:51.774523] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:00.718 [2024-10-01 16:54:51.840465] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.718 [2024-10-01 16:54:51.840504] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:00.718 [2024-10-01 16:54:51.840510] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.718 [2024-10-01 16:54:51.840515] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.718 [2024-10-01 16:54:51.840520] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:00.718 [2024-10-01 16:54:51.840566] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:30:00.718 [2024-10-01 16:54:51.840682] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:30:00.718 [2024-10-01 16:54:51.840807] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:30:00.718 [2024-10-01 16:54:51.840810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.718 Malloc0 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.718 16:54:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.718 [2024-10-01 16:54:52.002807] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.718 16:54:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.718 [2024-10-01 16:54:52.043089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2865126 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:00.718 16:54:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:02.637 16:54:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2865019 00:30:02.637 16:54:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:02.637 Read completed with error (sct=0, sc=8) 00:30:02.637 starting I/O failed 00:30:02.637 Read completed with error (sct=0, sc=8) 00:30:02.637 starting I/O failed 00:30:02.637 Read completed with error (sct=0, sc=8) 00:30:02.637 starting I/O failed 00:30:02.637 Read completed with error (sct=0, sc=8) 00:30:02.637 starting I/O failed 00:30:02.637 Read completed with error (sct=0, sc=8) 00:30:02.637 starting I/O failed 00:30:02.637 Read completed with error (sct=0, sc=8) 00:30:02.637 starting I/O failed 00:30:02.637 Write completed with error 
(sct=0, sc=8) 00:30:02.637 starting I/O failed 00:30:02.637 Read completed with error (sct=0, sc=8) 00:30:02.637 starting I/O failed 00:30:02.637 Read completed with error (sct=0, sc=8) 00:30:02.637 starting I/O failed 00:30:02.637 Read completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Read completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Read completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Read completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Read completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Read completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Read completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Write completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Read completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Write completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Write completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Write completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Write completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Read completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Write completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Read completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Read completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Read completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Write completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Read completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Write completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Write completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 Read completed with error (sct=0, sc=8) 00:30:02.638 starting I/O failed 00:30:02.638 [2024-10-01 16:54:54.083613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.638 [2024-10-01 16:54:54.084009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.084040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.084443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.084470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.084775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.084790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 
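Stepping back to the environment these tests run in: condensed from the nvmf_tcp_init trace above, the wiring is one E810 port per role, with the target port moved into its own network namespace and a firewall exception punched for the NVMe/TCP port; the two pings then verify each direction before any NVMe traffic flows. The same setup with the trace noise stripped (commands as the log ran them, minus the iptables bookkeeping comment):

# Target side: cvl_0_0 at 10.0.0.2/24 inside namespace cvl_0_0_ns_spdk.
# Initiator side: cvl_0_1 at 10.0.0.1/24 in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT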
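tc1's pass/fail logic is also worth decoding: it runs the reconnect example under autotest_common.sh's NOT-style wrapper (valid_exec_arg confirms the binary is executable, the command runs, es=1 is captured), and the test passes precisely because the probe failed - connect() returned errno 111 since nothing was listening on 10.0.0.2:4420 yet. A minimal sketch of that inversion helper (simplified; the real helper also special-cases signal deaths via the es > 128 check seen in the trace):

# Succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}
# tc1 in one line: expect the probe against a silent target to fail.
NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'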
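For tc2 the target is configured entirely over SPDK's JSON-RPC: each rpc_cmd above maps one-to-one onto scripts/rpc.py calls against the nvmf_tgt started in the target namespace. Assuming the default /var/tmp/spdk.sock control socket, the equivalent standalone sequence would be:

# Recreate tc2's target state: 64 MiB malloc bdev, TCP transport,
# subsystem cnode1 with that bdev as a namespace, data + discovery listeners.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420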
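The kill -9 of the target (pid 2865019) is the disconnect under test, and the burst above is its immediate fallout: every I/O outstanding on the reconnect example's 32-deep queues completes with sct=0, sc=8, the qpair dies with CQ transport error -6 (-ENXIO, "No such device or address"), and the host falls back to retrying the TCP connect. sct=0/sc=8 is the NVMe generic status "command aborted due to SQ deletion" (SPDK_NVME_SC_ABORTED_SQ_DELETION in nvme_spec.h) - the expected way in-flight commands drain when a queue pair is torn down. A tiny decoder covering just the codes this run emits (anything else is deferred to the spec):

# Map the (sct, sc) pairs seen in this log to their spec names.
decode_nvme_status() {   # usage: decode_nvme_status <sct> <sc>
    case "$1/$2" in
        0/0) echo "generic / successful completion" ;;
        0/8) echo "generic / command aborted due to SQ deletion" ;;
        *)   echo "sct=$1 sc=$2: see include/spdk/nvme_spec.h" ;;
    esac
}
decode_nvme_status 0 8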
00:30:02.638 [2024-10-01 16:54:54.085331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.085360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.085543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.085553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.085844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.085853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.086197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.086206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.086533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.086541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.086782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.086790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.087080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.087089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.087275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.087283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.087544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.087552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.087848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.087856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 
00:30:02.638 [2024-10-01 16:54:54.088195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.088204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.088400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.088408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.088654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.088661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.088976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.088985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.089340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.089348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.089568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.089575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.089887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.089895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.090192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.090200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.090475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.090483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.090806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.090814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 
00:30:02.638 [2024-10-01 16:54:54.090995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.091006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.091199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.091208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.091490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.091498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.091816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.091826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.092127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.092137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.092411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.092419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.092713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.638 [2024-10-01 16:54:54.092721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.638 qpair failed and we were unable to recover it. 00:30:02.638 [2024-10-01 16:54:54.093020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.093028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.093339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.093347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.093647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.093656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 
00:30:02.639 [2024-10-01 16:54:54.093802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.093811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.094023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.094031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.094465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.094473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.094781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.094789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.094952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.094960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.095264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.095272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.095542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.095551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.095832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.095840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.096134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.096143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.096421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.096428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 
00:30:02.639 [2024-10-01 16:54:54.096545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.096553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.096847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.096855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.097142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.097150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.097464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.097472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.097783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.097792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.097985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.097993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.098326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.098334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.098625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.098633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.098816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.098824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 00:30:02.639 [2024-10-01 16:54:54.099092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.639 [2024-10-01 16:54:54.099099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.639 qpair failed and we were unable to recover it. 
00:30:02.644 [2024-10-01 16:54:54.151965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.644 [2024-10-01 16:54:54.151974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.644 qpair failed and we were unable to recover it. 00:30:02.644 [2024-10-01 16:54:54.152271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.644 [2024-10-01 16:54:54.152278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.644 qpair failed and we were unable to recover it. 00:30:02.644 [2024-10-01 16:54:54.152548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.152555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.152831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.152838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.153130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.153137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.153294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.153302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.153604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.153611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.153869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.153876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.154137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.154145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.154454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.154461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 
00:30:02.645 [2024-10-01 16:54:54.154822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.154829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.155162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.155171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.155440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.155447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.155741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.155748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.156026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.156033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.156315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.156322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.156603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.156611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.156888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.156895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.157139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.157146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.157364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.157371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 
00:30:02.645 [2024-10-01 16:54:54.157617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.157624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.157907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.157914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.158209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.158216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.158488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.158496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.158762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.158769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.159060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.159068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.159421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.159428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.159788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.159796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.160085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.160092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.160381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.160389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 
00:30:02.645 [2024-10-01 16:54:54.160663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.160670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.160963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.160972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.161299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.161307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.161594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.161602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.161880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.161887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.645 [2024-10-01 16:54:54.162162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.645 [2024-10-01 16:54:54.162169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.645 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.162470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.162478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.162770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.162778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.163066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.163073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.163348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.163355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 
00:30:02.646 [2024-10-01 16:54:54.163647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.163654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.163876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.163883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.164169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.164176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.164337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.164344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.164703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.164710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.164888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.164896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.165263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.165271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.165542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.165549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.165736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.165744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.166075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.166082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 
00:30:02.646 [2024-10-01 16:54:54.166375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.166389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.166647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.166654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.166964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.166973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.167247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.167254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.167529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.167537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.167837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.167845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.168122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.168130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.168411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.168417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.168709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.168716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.168987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.168994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 
00:30:02.646 [2024-10-01 16:54:54.169358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.169364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.169649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.169657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.169926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.169933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.170226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.170233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.170389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.170398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.170580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.170588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.170797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.170804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.171054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.171061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.171443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.171450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.171719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.171726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 
00:30:02.646 [2024-10-01 16:54:54.172021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.172029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.172224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.172232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.172547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.172553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.172810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.646 [2024-10-01 16:54:54.172817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.646 qpair failed and we were unable to recover it. 00:30:02.646 [2024-10-01 16:54:54.173091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.173098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.173427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.173435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.173726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.173733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.174005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.174013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.174310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.174317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.174609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.174616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 
00:30:02.647 [2024-10-01 16:54:54.174889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.174896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.175188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.175196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.175494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.175501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.175771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.175778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.176064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.176072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.176372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.176379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.176651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.176657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.176938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.176945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.177218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.177225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.177290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.177297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 
00:30:02.647 [2024-10-01 16:54:54.177549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.177558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.177732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.177740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.177963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.177974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.178276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.178283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.178560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.178568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.178741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.178749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.179072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.179079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.179369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.179376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.179673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.179680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.179867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.179875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 
00:30:02.647 [2024-10-01 16:54:54.180180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.180187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.180365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.180372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.180680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.647 [2024-10-01 16:54:54.180687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.647 qpair failed and we were unable to recover it. 00:30:02.647 [2024-10-01 16:54:54.180867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.180874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.181201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.181208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.181482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.181489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.181757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.181764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.181960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.181968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.182229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.182237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.182515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.182522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 
00:30:02.648 [2024-10-01 16:54:54.182777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.182784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.182908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.182923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.183205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.183212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.183503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.183517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.183845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.183852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.184107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.184115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.184320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.184328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.184561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.184568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.184866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.184882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.185169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.185176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 
00:30:02.648 [2024-10-01 16:54:54.185443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.185450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.185615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.185623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.185796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.185803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.186078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.186085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.186448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.186455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.186711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.186718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.186992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.187000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.187291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.187298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.187584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.187591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.187866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.187873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 
00:30:02.648 [2024-10-01 16:54:54.188130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.188140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.188488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.188495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.188650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.188656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.188922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.188929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.189180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.189187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.189444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.189450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.189726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.189733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.190013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.190021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.190318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.190325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.190613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.190620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 
00:30:02.648 [2024-10-01 16:54:54.190914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.190921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.648 qpair failed and we were unable to recover it. 00:30:02.648 [2024-10-01 16:54:54.191076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.648 [2024-10-01 16:54:54.191083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.649 qpair failed and we were unable to recover it. 00:30:02.649 [2024-10-01 16:54:54.191285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.649 [2024-10-01 16:54:54.191292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.649 qpair failed and we were unable to recover it. 00:30:02.649 [2024-10-01 16:54:54.191650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.649 [2024-10-01 16:54:54.191657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.649 qpair failed and we were unable to recover it. 00:30:02.649 [2024-10-01 16:54:54.191927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.649 [2024-10-01 16:54:54.191934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.649 qpair failed and we were unable to recover it. 00:30:02.649 [2024-10-01 16:54:54.192094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.649 [2024-10-01 16:54:54.192102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.649 qpair failed and we were unable to recover it. 00:30:02.649 [2024-10-01 16:54:54.192386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.649 [2024-10-01 16:54:54.192393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.649 qpair failed and we were unable to recover it. 00:30:02.649 [2024-10-01 16:54:54.192688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.649 [2024-10-01 16:54:54.192696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.649 qpair failed and we were unable to recover it. 00:30:02.649 [2024-10-01 16:54:54.193007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.649 [2024-10-01 16:54:54.193015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.649 qpair failed and we were unable to recover it. 00:30:02.649 [2024-10-01 16:54:54.193320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.649 [2024-10-01 16:54:54.193328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.649 qpair failed and we were unable to recover it. 
00:30:02.654 [2024-10-01 16:54:54.249246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.654 [2024-10-01 16:54:54.249253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.654 qpair failed and we were unable to recover it. 00:30:02.654 [2024-10-01 16:54:54.249522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.654 [2024-10-01 16:54:54.249529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.654 qpair failed and we were unable to recover it. 00:30:02.654 [2024-10-01 16:54:54.249823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.654 [2024-10-01 16:54:54.249829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.654 qpair failed and we were unable to recover it. 00:30:02.654 [2024-10-01 16:54:54.250118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.654 [2024-10-01 16:54:54.250126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.654 qpair failed and we were unable to recover it. 00:30:02.654 [2024-10-01 16:54:54.250397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.654 [2024-10-01 16:54:54.250404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.654 qpair failed and we were unable to recover it. 00:30:02.654 [2024-10-01 16:54:54.250695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.654 [2024-10-01 16:54:54.250702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.654 qpair failed and we were unable to recover it. 00:30:02.654 [2024-10-01 16:54:54.251028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.654 [2024-10-01 16:54:54.251035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.654 qpair failed and we were unable to recover it. 00:30:02.654 [2024-10-01 16:54:54.251195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.654 [2024-10-01 16:54:54.251203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.654 qpair failed and we were unable to recover it. 00:30:02.654 [2024-10-01 16:54:54.251502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.654 [2024-10-01 16:54:54.251509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.654 qpair failed and we were unable to recover it. 00:30:02.654 [2024-10-01 16:54:54.251788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.654 [2024-10-01 16:54:54.251795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.654 qpair failed and we were unable to recover it. 
00:30:02.654 [2024-10-01 16:54:54.252150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.654 [2024-10-01 16:54:54.252158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.654 qpair failed and we were unable to recover it. 00:30:02.654 [2024-10-01 16:54:54.252416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.654 [2024-10-01 16:54:54.252423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.654 qpair failed and we were unable to recover it. 00:30:02.654 [2024-10-01 16:54:54.252721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.654 [2024-10-01 16:54:54.252728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.654 qpair failed and we were unable to recover it. 00:30:02.654 [2024-10-01 16:54:54.253012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.654 [2024-10-01 16:54:54.253020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.654 qpair failed and we were unable to recover it. 00:30:02.654 [2024-10-01 16:54:54.253306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.654 [2024-10-01 16:54:54.253313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.654 qpair failed and we were unable to recover it. 00:30:02.654 [2024-10-01 16:54:54.253569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.654 [2024-10-01 16:54:54.253576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.654 qpair failed and we were unable to recover it. 00:30:02.654 [2024-10-01 16:54:54.253864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.654 [2024-10-01 16:54:54.253871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.654 qpair failed and we were unable to recover it. 00:30:02.654 [2024-10-01 16:54:54.254180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.654 [2024-10-01 16:54:54.254189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.654 qpair failed and we were unable to recover it. 00:30:02.654 [2024-10-01 16:54:54.254455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.254463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.254732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.254740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 
00:30:02.655 [2024-10-01 16:54:54.255060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.255070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.255335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.255342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.255615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.255623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.255800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.255807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.256080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.256087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.256352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.256359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.256699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.256707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.256975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.256982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.257266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.257273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.257566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.257573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 
00:30:02.655 [2024-10-01 16:54:54.257844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.257851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.258128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.258136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.258429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.258437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.258708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.258715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.258985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.258994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.259278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.259285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.259558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.259565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.259741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.259750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.259956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.259963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.260228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.260235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 
00:30:02.655 [2024-10-01 16:54:54.260507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.260514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.260775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.260782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.260940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.260948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.261215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.261222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.261495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.261502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.261814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.261821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.262146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.262154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.262446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.262453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.262732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.262739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 00:30:02.655 [2024-10-01 16:54:54.263103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.263110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it. 
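Note: errno = 111 in the connect() failures above is ECONNREFUSED on Linux, i.e. the target host answered but nothing accepted the TCP connection on 10.0.0.2:4420 (the NVMe/TCP listener was down or not yet up). A minimal standalone sketch, not part of the test and not SPDK code, with the address and port copied from the log, that reproduces the same errno when no listener is present:

/* sketch: reproduce "connect() failed, errno = 111" against a dead listener */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);  /* target address from the log */
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With the host up but no listener on the port, this prints
         * errno = 111 (Connection refused), matching posix.c:1055 above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}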
00:30:02.655 [2024-10-01 16:54:54.263269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.655 [2024-10-01 16:54:54.263276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.655 qpair failed and we were unable to recover it.
00:30:02.655 Read completed with error (sct=0, sc=8) 00:30:02.655 starting I/O failed
[... 31 more completions failed the same way (sct=0, sc=8); 32 in total: 23 reads, 9 writes ...]
00:30:02.656 [2024-10-01 16:54:54.264013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:02.656 [2024-10-01 16:54:54.264434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.656 [2024-10-01 16:54:54.264479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205bdf0 with addr=10.0.0.2, port=4420 00:30:02.656 qpair failed and we were unable to recover it.
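Note: the (sct, sc) pair in the failed completions above comes from the 16-bit status word of the NVMe completion queue entry. Per the NVMe base specification (a spec fact, not stated in this log), sct=0 selects the generic command status set, in which sc=0x8 is "Command Aborted due to SQ Deletion"; that is consistent with all 32 outstanding I/Os being failed when qpair 3 hit the CQ transport error (-6 is ENXIO, "No such device or address"). A small standalone decoder sketch using the spec's bit layout (bit 0 phase tag, bits 8:1 SC, bits 11:9 SCT, bit 15 DNR):

/* sketch: decode sct/sc from an NVMe CQE status word */
#include <stdint.h>
#include <stdio.h>

static void decode_status(uint16_t status)
{
    unsigned sc  = (status >> 1) & 0xff; /* Status Code */
    unsigned sct = (status >> 9) & 0x7;  /* Status Code Type */
    unsigned dnr = (status >> 15) & 0x1; /* Do Not Retry */
    printf("sct=%u sc=0x%x dnr=%u\n", sct, sc, dnr);
    if (sct == 0 && sc == 0x08)
        printf("generic command status: Command Aborted due to SQ Deletion\n");
}

int main(void)
{
    decode_status((uint16_t)(0x08 << 1)); /* sct=0, sc=8, as logged above */
    return 0;
}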
00:30:02.656 [2024-10-01 16:54:54.264811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.656 [2024-10-01 16:54:54.264842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205bdf0 with addr=10.0.0.2, port=4420 00:30:02.656 qpair failed and we were unable to recover it.
00:30:02.656 [2024-10-01 16:54:54.265191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.656 [2024-10-01 16:54:54.265200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.656 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure pair above repeats 108 more times between 16:54:54.265 and 16:54:54.295, every attempt against tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 ...]
00:30:02.659 [2024-10-01 16:54:54.295217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.295225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.295494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.295501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.295797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.295804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.296077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.296084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.296350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.296357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.296663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.296670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.296967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.296976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.297187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.297195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.297487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.297495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.297670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.297678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 
00:30:02.659 [2024-10-01 16:54:54.297984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.297992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.298257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.298264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.298556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.298563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.298921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.298928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.299062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.299070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.299384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.299391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.299663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.299670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.299843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.299850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.300163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.300170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.300481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.300487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 
00:30:02.659 [2024-10-01 16:54:54.300780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.300788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.301060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.301067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.301423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.301431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.301729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.301737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.302014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.302021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.302290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.302297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.302473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.302481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.302814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.302821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.303097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.303104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.303397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.303404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 
00:30:02.659 [2024-10-01 16:54:54.303766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.303775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.304054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.304061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.304356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.304363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.304669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.304676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.304950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.659 [2024-10-01 16:54:54.304957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.659 qpair failed and we were unable to recover it. 00:30:02.659 [2024-10-01 16:54:54.305253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.660 [2024-10-01 16:54:54.305261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.660 qpair failed and we were unable to recover it. 00:30:02.660 [2024-10-01 16:54:54.305533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.660 [2024-10-01 16:54:54.305541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.660 qpair failed and we were unable to recover it. 00:30:02.660 [2024-10-01 16:54:54.305847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.660 [2024-10-01 16:54:54.305854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.660 qpair failed and we were unable to recover it. 00:30:02.660 [2024-10-01 16:54:54.306250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.660 [2024-10-01 16:54:54.306260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.660 qpair failed and we were unable to recover it. 00:30:02.660 [2024-10-01 16:54:54.306510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.660 [2024-10-01 16:54:54.306517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.660 qpair failed and we were unable to recover it. 
00:30:02.660 [2024-10-01 16:54:54.306675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.660 [2024-10-01 16:54:54.306682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.660 qpair failed and we were unable to recover it. 00:30:02.660 [2024-10-01 16:54:54.306883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.660 [2024-10-01 16:54:54.306890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.660 qpair failed and we were unable to recover it. 00:30:02.660 [2024-10-01 16:54:54.307170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.660 [2024-10-01 16:54:54.307178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.660 qpair failed and we were unable to recover it. 00:30:02.660 [2024-10-01 16:54:54.307461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.660 [2024-10-01 16:54:54.307468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.660 qpair failed and we were unable to recover it. 00:30:02.660 [2024-10-01 16:54:54.307743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.660 [2024-10-01 16:54:54.307751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.660 qpair failed and we were unable to recover it. 00:30:02.660 [2024-10-01 16:54:54.308033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.660 [2024-10-01 16:54:54.308040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.660 qpair failed and we were unable to recover it. 00:30:02.660 [2024-10-01 16:54:54.308328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.660 [2024-10-01 16:54:54.308335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.660 qpair failed and we were unable to recover it. 00:30:02.660 [2024-10-01 16:54:54.308551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.660 [2024-10-01 16:54:54.308559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.660 qpair failed and we were unable to recover it. 00:30:02.660 [2024-10-01 16:54:54.308860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.660 [2024-10-01 16:54:54.308867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.660 qpair failed and we were unable to recover it. 00:30:02.660 [2024-10-01 16:54:54.309165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.660 [2024-10-01 16:54:54.309173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.660 qpair failed and we were unable to recover it. 
00:30:02.934 [2024-10-01 16:54:54.309468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.934 [2024-10-01 16:54:54.309476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.934 qpair failed and we were unable to recover it. 00:30:02.934 [2024-10-01 16:54:54.309626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.934 [2024-10-01 16:54:54.309634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.934 qpair failed and we were unable to recover it. 00:30:02.934 [2024-10-01 16:54:54.309926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.934 [2024-10-01 16:54:54.309933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.934 qpair failed and we were unable to recover it. 00:30:02.934 [2024-10-01 16:54:54.310209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.934 [2024-10-01 16:54:54.310216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.934 qpair failed and we were unable to recover it. 00:30:02.934 [2024-10-01 16:54:54.310402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.934 [2024-10-01 16:54:54.310409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.934 qpair failed and we were unable to recover it. 00:30:02.934 [2024-10-01 16:54:54.311136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.934 [2024-10-01 16:54:54.311153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.311418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.311426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.311743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.311751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.312042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.312050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.312355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.312362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 
00:30:02.935 [2024-10-01 16:54:54.312553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.312560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.312859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.312866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.313166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.313174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.313356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.313363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.313655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.313662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.313956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.313964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.314160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.314167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.314478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.314485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.314758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.314773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.314985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.314993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 
00:30:02.935 [2024-10-01 16:54:54.315312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.315319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.315610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.315618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.315927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.315934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.316167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.316175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.316444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.316451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.316752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.316759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.317072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.317080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.317264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.317271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.317537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.317546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.317837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.317845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 
00:30:02.935 [2024-10-01 16:54:54.318118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.318125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.318418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.318425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.318702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.318709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.319007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.319015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.319315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.319322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.319614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.319621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.319894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.319901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.320177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.320185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.320461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.320468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.320738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.320745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 
00:30:02.935 [2024-10-01 16:54:54.321025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.321032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.321224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.321231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.321510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.935 [2024-10-01 16:54:54.321518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.935 qpair failed and we were unable to recover it. 00:30:02.935 [2024-10-01 16:54:54.321669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.321676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.321975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.321983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.322183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.322190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.322493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.322500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.322770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.322777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.323077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.323084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.323375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.323382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 
00:30:02.936 [2024-10-01 16:54:54.323655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.323662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.323956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.323963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.324250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.324257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.324513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.324520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.324781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.324788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.324973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.324981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.325262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.325269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.325561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.325568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.325848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.325856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.326142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.326150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 
00:30:02.936 [2024-10-01 16:54:54.326434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.326442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.326733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.326739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.327010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.327018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.327291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.327299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.327564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.327572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.327785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.327794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.328065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.328073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.328385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.328392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.328704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.328712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.328977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.328984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 
00:30:02.936 [2024-10-01 16:54:54.329138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.329146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.329440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.329447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.329726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.329740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.330026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.330033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.330318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.330331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.330619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.330626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.330776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.330784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.331059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.331067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.331375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.331381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 00:30:02.936 [2024-10-01 16:54:54.331732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.936 [2024-10-01 16:54:54.331741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.936 qpair failed and we were unable to recover it. 
00:30:02.936 [2024-10-01 16:54:54.332075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.936 [2024-10-01 16:54:54.332083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:02.936 qpair failed and we were unable to recover it.
[... the same three-message triplet — posix.c:1055:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." — repeats continuously from 16:54:54.332 through 16:54:54.388; duplicate occurrences elided ...]
00:30:02.942 [2024-10-01 16:54:54.388610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.388618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.942 qpair failed and we were unable to recover it. 00:30:02.942 [2024-10-01 16:54:54.388772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.388780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.942 qpair failed and we were unable to recover it. 00:30:02.942 [2024-10-01 16:54:54.388966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.388977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.942 qpair failed and we were unable to recover it. 00:30:02.942 [2024-10-01 16:54:54.389276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.389285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.942 qpair failed and we were unable to recover it. 00:30:02.942 [2024-10-01 16:54:54.389557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.389565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.942 qpair failed and we were unable to recover it. 00:30:02.942 [2024-10-01 16:54:54.389830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.389839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.942 qpair failed and we were unable to recover it. 00:30:02.942 [2024-10-01 16:54:54.390151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.390160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.942 qpair failed and we were unable to recover it. 00:30:02.942 [2024-10-01 16:54:54.390322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.390331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.942 qpair failed and we were unable to recover it. 00:30:02.942 [2024-10-01 16:54:54.390609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.390618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.942 qpair failed and we were unable to recover it. 00:30:02.942 [2024-10-01 16:54:54.390906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.390915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.942 qpair failed and we were unable to recover it. 
00:30:02.942 [2024-10-01 16:54:54.391177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.391186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.942 qpair failed and we were unable to recover it. 00:30:02.942 [2024-10-01 16:54:54.391471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.391479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.942 qpair failed and we were unable to recover it. 00:30:02.942 [2024-10-01 16:54:54.391748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.391756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.942 qpair failed and we were unable to recover it. 00:30:02.942 [2024-10-01 16:54:54.392031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.392040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.942 qpair failed and we were unable to recover it. 00:30:02.942 [2024-10-01 16:54:54.392357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.392367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.942 qpair failed and we were unable to recover it. 00:30:02.942 [2024-10-01 16:54:54.392412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.392420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.942 qpair failed and we were unable to recover it. 00:30:02.942 [2024-10-01 16:54:54.392731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.392741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.942 qpair failed and we were unable to recover it. 00:30:02.942 [2024-10-01 16:54:54.393014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.393022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.942 qpair failed and we were unable to recover it. 00:30:02.942 [2024-10-01 16:54:54.393299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.942 [2024-10-01 16:54:54.393307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.393481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.393490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 
00:30:02.943 [2024-10-01 16:54:54.393643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.393652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.393815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.393824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.394133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.394142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.394478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.394487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.394776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.394785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.394996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.395005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.395190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.395198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.395379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.395386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.395560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.395568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.395929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.395937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 
00:30:02.943 [2024-10-01 16:54:54.396223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.396233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.396416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.396426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.396749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.396757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.397072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.397081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.397260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.397269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.397562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.397570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.397844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.397853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.398033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.398043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.398340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.398348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.398618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.398626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 
00:30:02.943 [2024-10-01 16:54:54.398919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.398928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.399215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.399223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.399388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.399396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.399592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.399600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.399862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.399871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.400046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.400058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.400233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.400242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.400566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.400575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.400866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.400874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.401164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.401172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 
00:30:02.943 [2024-10-01 16:54:54.401461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.401468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.401742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.401749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.943 qpair failed and we were unable to recover it. 00:30:02.943 [2024-10-01 16:54:54.401811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.943 [2024-10-01 16:54:54.401819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.402081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.402090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.402420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.402427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.402725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.402734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.402919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.402927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.403192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.403201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.403519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.403528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.403707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.403716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 
00:30:02.944 [2024-10-01 16:54:54.403884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.403892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.404204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.404213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.404517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.404527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.404817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.404826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.405087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.405095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.405372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.405379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.405698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.405706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.406005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.406014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.406310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.406319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.406611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.406619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 
00:30:02.944 [2024-10-01 16:54:54.406893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.406900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.407090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.407099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.407265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.407275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.407538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.407547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.407826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.407834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.408125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.408133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.408422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.408430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.408582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.408591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.408935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.408946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.409124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.409133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 
00:30:02.944 [2024-10-01 16:54:54.409401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.409409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.409710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.409718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.409893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.409903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.410166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.410175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.410345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.410353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.410678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.410689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.410978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.410987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.411278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.411287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.411576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.411584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.411912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.411921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 
00:30:02.944 [2024-10-01 16:54:54.412195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.944 [2024-10-01 16:54:54.412204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.944 qpair failed and we were unable to recover it. 00:30:02.944 [2024-10-01 16:54:54.412495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.412503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.412829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.412838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.413128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.413136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.413330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.413337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.413597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.413606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.413933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.413942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.414237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.414246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.414518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.414526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.414817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.414826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 
00:30:02.945 [2024-10-01 16:54:54.415093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.415103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.415399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.415407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.415631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.415639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.415953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.415962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.416241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.416251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.416437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.416447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.416702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.416711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.417014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.417023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.417314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.417322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.417486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.417495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 
00:30:02.945 [2024-10-01 16:54:54.417809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.417818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.418137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.418146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.418430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.418447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.418733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.418741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.419066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.419074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.419373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.419382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.419691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.419701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.420019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.420027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.420201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.420209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.420483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.420492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 
00:30:02.945 [2024-10-01 16:54:54.420780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.420788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.421080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.421089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.421380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.421388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.421535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.421542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.421722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.421731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.422019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.422030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.422210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.422218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.422530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.422539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.422810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.422818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 00:30:02.945 [2024-10-01 16:54:54.423117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.945 [2024-10-01 16:54:54.423125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.945 qpair failed and we were unable to recover it. 
00:30:02.946 [2024-10-01 16:54:54.423284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.946 [2024-10-01 16:54:54.423292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:02.946 qpair failed and we were unable to recover it.
00:30:02.946-00:30:02.951 [2024-10-01 16:54:54.423470 .. 16:54:54.481661] The same three-line failure repeats for roughly 200 further connection attempts: every connect() from posix_sock_create() to 10.0.0.2 port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x7fde64000b90 each time, and each qpair fails without recovery.
00:30:02.951 [2024-10-01 16:54:54.481974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.951 [2024-10-01 16:54:54.481984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.951 qpair failed and we were unable to recover it. 00:30:02.951 [2024-10-01 16:54:54.482131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.951 [2024-10-01 16:54:54.482140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.951 qpair failed and we were unable to recover it. 00:30:02.951 [2024-10-01 16:54:54.482419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.951 [2024-10-01 16:54:54.482427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.951 qpair failed and we were unable to recover it. 00:30:02.951 [2024-10-01 16:54:54.482692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.951 [2024-10-01 16:54:54.482700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.951 qpair failed and we were unable to recover it. 00:30:02.951 [2024-10-01 16:54:54.482985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.951 [2024-10-01 16:54:54.482994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.951 qpair failed and we were unable to recover it. 00:30:02.951 [2024-10-01 16:54:54.483302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.951 [2024-10-01 16:54:54.483310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.951 qpair failed and we were unable to recover it. 00:30:02.951 [2024-10-01 16:54:54.483605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.951 [2024-10-01 16:54:54.483613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.951 qpair failed and we were unable to recover it. 00:30:02.951 [2024-10-01 16:54:54.483903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.951 [2024-10-01 16:54:54.483911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.951 qpair failed and we were unable to recover it. 00:30:02.951 [2024-10-01 16:54:54.484191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.951 [2024-10-01 16:54:54.484200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.951 qpair failed and we were unable to recover it. 00:30:02.951 [2024-10-01 16:54:54.484488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.951 [2024-10-01 16:54:54.484497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.951 qpair failed and we were unable to recover it. 
00:30:02.951 [2024-10-01 16:54:54.484762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.951 [2024-10-01 16:54:54.484771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.951 qpair failed and we were unable to recover it. 00:30:02.951 [2024-10-01 16:54:54.485060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.951 [2024-10-01 16:54:54.485070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.951 qpair failed and we were unable to recover it. 00:30:02.951 [2024-10-01 16:54:54.485340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.951 [2024-10-01 16:54:54.485349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.951 qpair failed and we were unable to recover it. 00:30:02.951 [2024-10-01 16:54:54.485634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.951 [2024-10-01 16:54:54.485643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.951 qpair failed and we were unable to recover it. 00:30:02.951 [2024-10-01 16:54:54.485980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.951 [2024-10-01 16:54:54.485988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.951 qpair failed and we were unable to recover it. 00:30:02.951 [2024-10-01 16:54:54.486288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.951 [2024-10-01 16:54:54.486297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.951 qpair failed and we were unable to recover it. 00:30:02.951 [2024-10-01 16:54:54.486489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.951 [2024-10-01 16:54:54.486497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.951 qpair failed and we were unable to recover it. 00:30:02.951 [2024-10-01 16:54:54.486779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.951 [2024-10-01 16:54:54.486788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.951 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.487097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.487106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.487392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.487401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 
00:30:02.952 [2024-10-01 16:54:54.487707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.487715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.488003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.488012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.488289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.488298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.488647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.488656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.488937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.488946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.489240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.489248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.489405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.489413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.489718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.489728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.490042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.490050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.490353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.490361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 
00:30:02.952 [2024-10-01 16:54:54.490621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.490631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.490929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.490937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.491286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.491294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.491581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.491589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.491766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.491775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.492027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.492036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.492317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.492325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.492621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.492631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.492935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.492943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.493233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.493243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 
00:30:02.952 [2024-10-01 16:54:54.493409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.493417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.493658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.493666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.493842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.493851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.494130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.494139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.494364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.494372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.494671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.494679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.494826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.494834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.495123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.495131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.495407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.495414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.495673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.495682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 
00:30:02.952 [2024-10-01 16:54:54.495980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.495988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.496258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.952 [2024-10-01 16:54:54.496266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.952 qpair failed and we were unable to recover it. 00:30:02.952 [2024-10-01 16:54:54.496502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.496512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.496805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.496813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.497132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.497141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.497436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.497444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.497801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.497811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.498118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.498126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.498409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.498418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.498730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.498738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 
00:30:02.953 [2024-10-01 16:54:54.498912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.498920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.499091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.499100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.499375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.499383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.499648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.499656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.499966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.499977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.500232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.500240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.500572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.500580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.500759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.500767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.501076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.501085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.501349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.501357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 
00:30:02.953 [2024-10-01 16:54:54.501658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.501666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.501955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.501964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.502237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.502247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.502502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.502511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.502686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.502696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.503002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.503011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.503310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.503319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.503611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.503619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.503914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.503923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.504217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.504225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 
00:30:02.953 [2024-10-01 16:54:54.504507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.504515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.504804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.504812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.505082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.505091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.505387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.505395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.505703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.505712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.506000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.506009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.506308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.506318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.506492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.506502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.506776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.506785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.507075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.507084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 
00:30:02.953 [2024-10-01 16:54:54.507352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.953 [2024-10-01 16:54:54.507360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.953 qpair failed and we were unable to recover it. 00:30:02.953 [2024-10-01 16:54:54.507644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.507652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.507918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.507929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.508248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.508257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.508532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.508540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.508866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.508874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.509159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.509167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.509462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.509470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.509769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.509778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.510081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.510089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 
00:30:02.954 [2024-10-01 16:54:54.510365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.510373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.510659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.510667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.510935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.510944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.511269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.511277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.511435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.511443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.511618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.511626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.511900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.511908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.512204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.512213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.512480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.512489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.512788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.512797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 
00:30:02.954 [2024-10-01 16:54:54.513067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.513076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.513374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.513383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.513688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.513697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.514027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.514036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.514330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.514339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.514631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.514640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.514948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.514957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.515282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.515291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.515601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.515610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.515900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.515910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 
00:30:02.954 [2024-10-01 16:54:54.516078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.516087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.516428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.516438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.516710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.516719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.517010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.517018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.517211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.517219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.517510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.517519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.517826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.517834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.518005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.518014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.518255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.518265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.954 qpair failed and we were unable to recover it. 00:30:02.954 [2024-10-01 16:54:54.518563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.954 [2024-10-01 16:54:54.518571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.955 qpair failed and we were unable to recover it. 
00:30:02.955 [2024-10-01 16:54:54.518823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.955 [2024-10-01 16:54:54.518831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.955 qpair failed and we were unable to recover it.
[... the same three-line failure repeats back-to-back roughly 210 times between 16:54:54.518823 and 16:54:54.578222: every connect() to 10.0.0.2:4420 returns errno = 111 (ECONNREFUSED) and each qpair reconnect attempt on tqpair=0x7fde64000b90 fails; only the first and last occurrences are kept here ...]
00:30:02.960 [2024-10-01 16:54:54.578213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.578222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.960 qpair failed and we were unable to recover it.
00:30:02.960 [2024-10-01 16:54:54.578491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.578500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.960 qpair failed and we were unable to recover it. 00:30:02.960 [2024-10-01 16:54:54.578797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.578806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.960 qpair failed and we were unable to recover it. 00:30:02.960 [2024-10-01 16:54:54.579078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.579087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.960 qpair failed and we were unable to recover it. 00:30:02.960 [2024-10-01 16:54:54.579392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.579400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.960 qpair failed and we were unable to recover it. 00:30:02.960 [2024-10-01 16:54:54.579720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.579729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.960 qpair failed and we were unable to recover it. 00:30:02.960 [2024-10-01 16:54:54.580040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.580048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.960 qpair failed and we were unable to recover it. 00:30:02.960 [2024-10-01 16:54:54.580355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.580363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.960 qpair failed and we were unable to recover it. 00:30:02.960 [2024-10-01 16:54:54.580655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.580663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.960 qpair failed and we were unable to recover it. 00:30:02.960 [2024-10-01 16:54:54.580977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.580985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.960 qpair failed and we were unable to recover it. 00:30:02.960 [2024-10-01 16:54:54.581263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.581272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.960 qpair failed and we were unable to recover it. 
00:30:02.960 [2024-10-01 16:54:54.581576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.581585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.960 qpair failed and we were unable to recover it. 00:30:02.960 [2024-10-01 16:54:54.581863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.581872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.960 qpair failed and we were unable to recover it. 00:30:02.960 [2024-10-01 16:54:54.582031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.582040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.960 qpair failed and we were unable to recover it. 00:30:02.960 [2024-10-01 16:54:54.582347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.582355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.960 qpair failed and we were unable to recover it. 00:30:02.960 [2024-10-01 16:54:54.582623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.582631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.960 qpair failed and we were unable to recover it. 00:30:02.960 [2024-10-01 16:54:54.582923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.582933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.960 qpair failed and we were unable to recover it. 00:30:02.960 [2024-10-01 16:54:54.583214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.583223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.960 qpair failed and we were unable to recover it. 00:30:02.960 [2024-10-01 16:54:54.583385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.960 [2024-10-01 16:54:54.583393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.583566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.583575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.583822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.583830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 
00:30:02.961 [2024-10-01 16:54:54.584098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.584106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.584404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.584414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.584709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.584718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.585022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.585031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.585309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.585317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.585616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.585624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.585887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.585895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.586187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.586197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.586470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.586478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.586634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.586643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 
00:30:02.961 [2024-10-01 16:54:54.586951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.586960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.587231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.587239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.587418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.587430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.587675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.587684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.587956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.587964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.588162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.588170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.588464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.588472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.588646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.588655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.588954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.588963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.589146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.589153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 
00:30:02.961 [2024-10-01 16:54:54.589489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.589498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.589655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.589664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.589987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.589995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.590165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.590173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.590490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.590498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.590665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.590674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.590866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.590875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.591162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.591171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.591471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.591480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.591748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.591758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 
00:30:02.961 [2024-10-01 16:54:54.592028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.592036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.592184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.961 [2024-10-01 16:54:54.592191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.961 qpair failed and we were unable to recover it. 00:30:02.961 [2024-10-01 16:54:54.592351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.592360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.592564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.592573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.592739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.592749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.593024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.593032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.593328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.593337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.593635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.593644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.593923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.593931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.594133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.594142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 
00:30:02.962 [2024-10-01 16:54:54.594311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.594320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.594508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.594517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.594784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.594792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.594971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.594980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.595169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.595176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.595438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.595447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.595619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.595626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.595873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.595881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.596179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.596188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.596536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.596544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 
00:30:02.962 [2024-10-01 16:54:54.596811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.596819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.597017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.597026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.597311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.597322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.597620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.597628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.597906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.597914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.598204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.598213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.598472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.598480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.598790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.598798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.599073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.599082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.599352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.599361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 
00:30:02.962 [2024-10-01 16:54:54.599679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.599688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.599946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.599954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.600228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.600236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.600538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.600547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.600714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.600723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.600974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.600984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.601246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.601254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.601547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.601556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.601867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.601876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.962 [2024-10-01 16:54:54.602173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.602185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 
00:30:02.962 [2024-10-01 16:54:54.602499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.962 [2024-10-01 16:54:54.602507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.962 qpair failed and we were unable to recover it. 00:30:02.963 [2024-10-01 16:54:54.602689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.963 [2024-10-01 16:54:54.602697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:02.963 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.602978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.602987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.603278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.603288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.603626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.603635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.603922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.603930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.604057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.604069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.604266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.604275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.604568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.604577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.604757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.604765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 
00:30:03.237 [2024-10-01 16:54:54.605083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.605091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.605308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.605316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.605609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.605619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.605921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.605929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.606221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.606230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.606610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.606619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.606877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.606885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.607201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.607209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.607520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.607528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.607833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.607842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 
00:30:03.237 [2024-10-01 16:54:54.608127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.608135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.608309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.608317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.608626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.608636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.608893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.608902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.609093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.609101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.609360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.237 [2024-10-01 16:54:54.609367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.237 qpair failed and we were unable to recover it. 00:30:03.237 [2024-10-01 16:54:54.609680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.238 [2024-10-01 16:54:54.609689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.238 qpair failed and we were unable to recover it. 00:30:03.238 [2024-10-01 16:54:54.610010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.238 [2024-10-01 16:54:54.610020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.238 qpair failed and we were unable to recover it. 00:30:03.238 [2024-10-01 16:54:54.610287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.238 [2024-10-01 16:54:54.610296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.238 qpair failed and we were unable to recover it. 00:30:03.238 [2024-10-01 16:54:54.610586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.238 [2024-10-01 16:54:54.610595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.238 qpair failed and we were unable to recover it. 
00:30:03.238 [2024-10-01 16:54:54.610863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.238 [2024-10-01 16:54:54.610872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.238 qpair failed and we were unable to recover it. 00:30:03.238 [2024-10-01 16:54:54.611183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.238 [2024-10-01 16:54:54.611192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.238 qpair failed and we were unable to recover it. 00:30:03.238 [2024-10-01 16:54:54.611478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.238 [2024-10-01 16:54:54.611486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.238 qpair failed and we were unable to recover it. 00:30:03.238 [2024-10-01 16:54:54.611777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.238 [2024-10-01 16:54:54.611786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.238 qpair failed and we were unable to recover it. 00:30:03.238 [2024-10-01 16:54:54.611965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.238 [2024-10-01 16:54:54.611975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.238 qpair failed and we were unable to recover it. 00:30:03.238 [2024-10-01 16:54:54.612237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.238 [2024-10-01 16:54:54.612245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.238 qpair failed and we were unable to recover it. 00:30:03.238 [2024-10-01 16:54:54.612509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.238 [2024-10-01 16:54:54.612518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.238 qpair failed and we were unable to recover it. 00:30:03.238 [2024-10-01 16:54:54.612825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.238 [2024-10-01 16:54:54.612834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.238 qpair failed and we were unable to recover it. 00:30:03.238 [2024-10-01 16:54:54.613126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.238 [2024-10-01 16:54:54.613135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.238 qpair failed and we were unable to recover it. 00:30:03.238 [2024-10-01 16:54:54.613423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.238 [2024-10-01 16:54:54.613431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.238 qpair failed and we were unable to recover it. 
00:30:03.238 [2024-10-01 16:54:54.613701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.238 [2024-10-01 16:54:54.613709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.238 qpair failed and we were unable to recover it.
00:30:03.238 [... the same three-line failure (connect() errno = 111, sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420, qpair unrecoverable) repeats for every reconnect attempt from 16:54:54.614009 through 16:54:54.635280; identical repetitions elided ...]
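Editor's aside: errno 111 on Linux is ECONNREFUSED, i.e. the TCP connection attempt to 10.0.0.2:4420 was actively refused because nothing was accepting on that port (4420 is the standard NVMe/TCP port the nvmf target would normally listen on). A minimal sketch of the failing socket path, independent of SPDK — the address and port simply mirror the log, and on a host where 10.0.0.2 is reachable but no target is up this should print the same errno:

/* repro_refused.c -- minimal sketch (not SPDK code): connect() to a
 * reachable host/port with no listener yields errno 111 (ECONNREFUSED). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Address and port mirror the log; any non-listening port behaves the same. */
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* Expected output: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}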
00:30:03.240 [... the tqpair=0x7fde64000b90 retry loop continues from 16:54:54.635459 through 16:54:54.637253, then two distinct events follow ...]
00:30:03.240 [2024-10-01 16:54:54.637434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2051960 is same with the state(6) to be set
00:30:03.240 [2024-10-01 16:54:54.638075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.240 [2024-10-01 16:54:54.638165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205bdf0 with addr=10.0.0.2, port=4420
00:30:03.240 qpair failed and we were unable to recover it.
00:30:03.240 [2024-10-01 16:54:54.638451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.240 [2024-10-01 16:54:54.638490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205bdf0 with addr=10.0.0.2, port=4420
00:30:03.240 qpair failed and we were unable to recover it.
00:30:03.240 [... note: the recv-state message reports a redundant state transition on a separate qpair object (tqpair=0x2051960), and the two attempts above target yet another qpair (0x205bdf0) against the same unreachable 10.0.0.2:4420 before the loop resumes on tqpair=0x7fde64000b90 from 16:54:54.638816 through 16:54:54.641085; identical repetitions elided ...]
00:30:03.240 [... the identical failure loop on tqpair=0x7fde64000b90 (connect() errno = 111 against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") continues uninterrupted from 16:54:54.641398 through 16:54:54.672612; identical repetitions elided ...]
00:30:03.244 [2024-10-01 16:54:54.672805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.672813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.673063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.673071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.673357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.673365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.673670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.673679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.673958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.673966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.674264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.674274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.674550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.674558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.674853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.674862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.675139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.675148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.675446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.675455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 
00:30:03.244 [2024-10-01 16:54:54.675749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.675757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.675927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.675935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.676198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.676207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.676498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.676506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.676776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.676784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.677065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.677073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.677371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.677380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.677647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.677655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.677922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.677930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.678254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.678262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 
00:30:03.244 [2024-10-01 16:54:54.678543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.678552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.678857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.678865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.679127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.679135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.679401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.679410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.679683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.679691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.680011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.680020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.680287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.680297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.680567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.680575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.680857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.680865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.681127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.681136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 
00:30:03.244 [2024-10-01 16:54:54.681404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.681412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.681708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.681716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.681998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.682006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.682317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.682326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.682616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.682625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.682893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.682902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.683256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.683265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.244 [2024-10-01 16:54:54.683571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.244 [2024-10-01 16:54:54.683580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.244 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.683739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.683749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.683905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.683914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 
00:30:03.245 [2024-10-01 16:54:54.684229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.684237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.684508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.684515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.684783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.684791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.685083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.685092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.685343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.685351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.685629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.685638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.685930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.685938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.686271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.686279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.686579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.686587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.686875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.686883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 
00:30:03.245 [2024-10-01 16:54:54.687165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.687173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.687443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.687452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.687604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.687613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.687802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.687810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.688095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.688104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.688434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.688444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.688756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.688764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.689002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.689010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.689311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.689320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.689587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.689595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 
00:30:03.245 [2024-10-01 16:54:54.689874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.689882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.690174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.690183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.690358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.690367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.690531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.690540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.690798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.690808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.691077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.691086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.691366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.691378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.691662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.691671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.691946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.691954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.692117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.692127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 
00:30:03.245 [2024-10-01 16:54:54.692382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.692390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.692697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.692706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.692975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.692983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.693245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.693253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.693528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.693536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.693873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.245 [2024-10-01 16:54:54.693881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.245 qpair failed and we were unable to recover it. 00:30:03.245 [2024-10-01 16:54:54.694129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.694137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.694309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.694317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.694629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.694637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.694932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.694941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 
00:30:03.246 [2024-10-01 16:54:54.695253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.695261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.695552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.695561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.695851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.695860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.696158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.696167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.696474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.696483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.696771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.696781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.697048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.697056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.697340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.697350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.697650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.697658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.697952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.697960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 
00:30:03.246 [2024-10-01 16:54:54.698243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.698252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.698541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.698549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.698858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.698866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.699151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.699161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.699456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.699464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.699736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.699745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.700013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.700021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.700316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.700325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.700632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.700640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.700936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.700944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 
00:30:03.246 [2024-10-01 16:54:54.701272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.701281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.701556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.701564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.701870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.701878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.702172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.702180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.702486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.702494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.702712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.702720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.702985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.702994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.703306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.703314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.703504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.703511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.246 qpair failed and we were unable to recover it. 00:30:03.246 [2024-10-01 16:54:54.703818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.246 [2024-10-01 16:54:54.703826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 
00:30:03.247 [2024-10-01 16:54:54.704096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.704105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.704287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.704295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.704495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.704503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.704781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.704789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.705073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.705083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.705379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.705387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.705692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.705700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.705993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.706001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.706195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.706203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.706373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.706382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 
00:30:03.247 [2024-10-01 16:54:54.706696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.706705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.706997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.707006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.707311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.707319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.707630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.707639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.707933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.707942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.708224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.708232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.708385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.708394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.708565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.708574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.708768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.708776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.709054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.709062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 
00:30:03.247 [2024-10-01 16:54:54.709213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.709221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.709537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.709546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.709817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.709826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.710025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.710035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.710340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.710349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.710618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.710626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.710915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.710924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.711198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.711206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.711473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.711481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.711770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.711780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 
00:30:03.247 [2024-10-01 16:54:54.712071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.712080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.712366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.712374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.712678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.712687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.712967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.712978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.713286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.713295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.713587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.713595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.247 [2024-10-01 16:54:54.713887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.247 [2024-10-01 16:54:54.713897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.247 qpair failed and we were unable to recover it. 00:30:03.248 [2024-10-01 16:54:54.714184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.248 [2024-10-01 16:54:54.714193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.248 qpair failed and we were unable to recover it. 00:30:03.248 [2024-10-01 16:54:54.714451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.248 [2024-10-01 16:54:54.714459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.248 qpair failed and we were unable to recover it. 00:30:03.248 [2024-10-01 16:54:54.714728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.248 [2024-10-01 16:54:54.714737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.248 qpair failed and we were unable to recover it. 
00:30:03.248 [2024-10-01 16:54:54.715027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.715036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.715333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.715341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.715693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.715702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.716018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.716026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.716179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.716186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.716475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.716483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.716779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.716787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.717075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.717083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.717266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.717274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.717552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.717560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.717855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.717864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.718166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.718175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.718451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.718459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.718758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.718767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.719080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.719090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.719392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.719401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.719689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.719698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.719955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.719964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.720260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.720269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.720558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.720567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.720868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.720877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.721159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.721168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.721430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.721438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.721727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.721737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.722007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.722015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.722306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.722315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.722586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.722593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.722861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.722869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.723130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.723138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.723423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.723431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.723586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.723594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.723856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.723864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.724140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.724148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.724486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.724495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.724784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.248 [2024-10-01 16:54:54.724792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.248 qpair failed and we were unable to recover it.
00:30:03.248 [2024-10-01 16:54:54.725071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.725079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.725406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.725414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.725717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.725727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.726038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.726047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.726207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.726216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.726479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.726488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.726752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.726760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.727059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.727068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.727371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.727379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.727695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.727703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.727981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.727989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.728288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.728297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.728586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.728594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.728909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.728918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.729109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.729118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.729419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.729427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.729694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.729702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.730003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.730011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.730293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.730301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.730588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.730596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.730883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.730892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.731182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.731191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.731462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.731470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.731765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.731773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.732083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.732092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.732427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.732435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.732645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.732653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.732814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.732823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.733108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.733118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.733282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.733290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.733681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.733774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205bdf0 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.734121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.734161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205bdf0 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.734460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.734471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.734770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.734778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.735054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.735062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.735383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.735392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.735711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.735719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.736035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.249 [2024-10-01 16:54:54.736045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.249 qpair failed and we were unable to recover it.
00:30:03.249 [2024-10-01 16:54:54.736192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.736201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.736499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.736507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.736814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.736824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.737177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.737185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.737467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.737477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.737789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.737797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.738106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.738115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.738421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.738429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.738692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.738700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.738998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.739006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.739260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.739269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.739573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.739582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.739885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.739894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.740203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.740211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.740489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.740498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.740787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.740796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.740986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.740994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.741281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.741289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.741592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.741601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.741913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.741922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.742261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.742270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.742534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.742542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.742816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.742825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.743096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.743104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.743290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.743299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.743602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.743611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.743882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.743891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.744176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.744184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.744454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.744462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.744731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.744739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.745066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.745076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.745350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.745359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.745666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.745674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.745823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.745831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.745962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.745972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.250 [2024-10-01 16:54:54.746243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.250 [2024-10-01 16:54:54.746252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.250 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.746521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.746529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.746823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.746832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.747131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.747139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.747323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.747331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.747584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.747593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.747895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.747903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.748168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.748177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.748438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.748446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.748607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.748617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.748848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.748856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.749224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.749233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.749505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.749513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.749811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.749819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.750036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.750044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.750331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.750339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.750631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.750640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.750958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.750967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.751239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.751247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.751536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.751545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.751744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.751753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.752032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.752041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.752342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.752350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.752508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.752516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.752777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.752785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.753079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.753088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.753371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.753379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.753692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.753701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.753992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.754000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.754310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.754321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.754507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.754516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.754823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.754831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.755023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.755032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.755306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.755314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.755611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.755619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.755929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.755940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.756215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.756223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.756402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.756410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.251 [2024-10-01 16:54:54.756710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.251 [2024-10-01 16:54:54.756718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.251 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.756979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.756987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.757254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.757263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.757529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.757536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.757812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.757821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.758111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.758119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.758400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.758408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.759107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.759124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.759402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.759412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.759719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.759728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.760003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.760013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.760312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.760321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.760604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.760613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.760879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.760887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.761188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.761197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.761465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.761474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.762185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.762203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.762506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.762513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.762812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.762819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.763145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.763153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.763313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.763320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.763475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.763483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.763776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.763783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.764046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.252 [2024-10-01 16:54:54.764053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.252 qpair failed and we were unable to recover it.
00:30:03.252 [2024-10-01 16:54:54.764385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.252 [2024-10-01 16:54:54.764393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.252 qpair failed and we were unable to recover it. 00:30:03.252 [2024-10-01 16:54:54.764551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.252 [2024-10-01 16:54:54.764559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.252 qpair failed and we were unable to recover it. 00:30:03.252 [2024-10-01 16:54:54.764834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.252 [2024-10-01 16:54:54.764842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.252 qpair failed and we were unable to recover it. 00:30:03.252 [2024-10-01 16:54:54.765129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.252 [2024-10-01 16:54:54.765137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.252 qpair failed and we were unable to recover it. 00:30:03.252 [2024-10-01 16:54:54.765389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.252 [2024-10-01 16:54:54.765396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.252 qpair failed and we were unable to recover it. 00:30:03.252 [2024-10-01 16:54:54.765683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.252 [2024-10-01 16:54:54.765690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.252 qpair failed and we were unable to recover it. 00:30:03.252 [2024-10-01 16:54:54.765980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.252 [2024-10-01 16:54:54.765988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.252 qpair failed and we were unable to recover it. 00:30:03.252 [2024-10-01 16:54:54.766265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.252 [2024-10-01 16:54:54.766272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.252 qpair failed and we were unable to recover it. 00:30:03.252 [2024-10-01 16:54:54.766556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.252 [2024-10-01 16:54:54.766563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.252 qpair failed and we were unable to recover it. 00:30:03.252 [2024-10-01 16:54:54.766714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.252 [2024-10-01 16:54:54.766722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.252 qpair failed and we were unable to recover it. 
00:30:03.252 [2024-10-01 16:54:54.767040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.252 [2024-10-01 16:54:54.767048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.252 qpair failed and we were unable to recover it. 00:30:03.252 [2024-10-01 16:54:54.767229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.252 [2024-10-01 16:54:54.767237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.252 qpair failed and we were unable to recover it. 00:30:03.252 [2024-10-01 16:54:54.767515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.252 [2024-10-01 16:54:54.767522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.252 qpair failed and we were unable to recover it. 00:30:03.252 [2024-10-01 16:54:54.767682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.252 [2024-10-01 16:54:54.767696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.252 qpair failed and we were unable to recover it. 00:30:03.252 [2024-10-01 16:54:54.767997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.252 [2024-10-01 16:54:54.768005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.252 qpair failed and we were unable to recover it. 00:30:03.252 [2024-10-01 16:54:54.768286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.768293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.768609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.768616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.768884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.768892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.769222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.769230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.769510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.769518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 
00:30:03.253 [2024-10-01 16:54:54.769831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.769839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.770136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.770143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.770429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.770436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.770709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.770716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.771034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.771042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.771308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.771315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.771624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.771632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.771897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.771904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.772123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.772130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.772399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.772407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 
00:30:03.253 [2024-10-01 16:54:54.772698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.772705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.773006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.773014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.773359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.773366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.773667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.773675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.773957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.773964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.774235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.774243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.774529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.774537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.774835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.774843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.775142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.775150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.775426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.775433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 
00:30:03.253 [2024-10-01 16:54:54.775709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.775716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.775990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.775997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.776276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.776282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.776457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.776464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.776733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.776740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.777046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.777054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.777339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.777347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.777612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.777619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.777903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.777910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.778247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.778254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 
00:30:03.253 [2024-10-01 16:54:54.778549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.778556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.778912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.778919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.779261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.253 [2024-10-01 16:54:54.779268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.253 qpair failed and we were unable to recover it. 00:30:03.253 [2024-10-01 16:54:54.779577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.779586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.779860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.779868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.780232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.780239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.780432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.780439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.780734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.780741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.781129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.781136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.781416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.781423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 
00:30:03.254 [2024-10-01 16:54:54.781724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.781731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.782010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.782017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.782296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.782303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.782593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.782600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.782884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.782891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.783194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.783202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.783368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.783376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.783647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.783654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.783930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.783937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.784232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.784239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 
00:30:03.254 [2024-10-01 16:54:54.784397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.784405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.784666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.784673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.784948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.784955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.785238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.785245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.785520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.785527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.785807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.785814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.786108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.786116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.786411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.786419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.786703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.786711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.786997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.787005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 
00:30:03.254 [2024-10-01 16:54:54.787173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.787180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.787376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.787383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.787649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.787656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.787933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.787940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.788267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.788275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.788550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.788557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.788826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.788833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.789112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.789119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.789284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.789292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.789552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.789559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 
00:30:03.254 [2024-10-01 16:54:54.789868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.254 [2024-10-01 16:54:54.789876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.254 qpair failed and we were unable to recover it. 00:30:03.254 [2024-10-01 16:54:54.789922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.789930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.790179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.790186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.790457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.790466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.790757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.790764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.790942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.790950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.791219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.791226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.791501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.791508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.791779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.791786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.792106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.792113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 
00:30:03.255 [2024-10-01 16:54:54.792414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.792421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.792692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.792699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.792893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.792900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.793204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.793211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.793536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.793544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.793846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.793854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.794127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.794135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.794413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.794420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.794724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.794731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.795012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.795020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 
00:30:03.255 [2024-10-01 16:54:54.795315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.795322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.795637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.795644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.795912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.795920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.796202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.796209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.796534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.796542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.796706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.796713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.797037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.797044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.797317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.797324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.797590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.797597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.797891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.797898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 
00:30:03.255 [2024-10-01 16:54:54.798182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.798189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.798486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.798493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.798804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.798812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.799113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.799120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.799420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.799427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.255 qpair failed and we were unable to recover it. 00:30:03.255 [2024-10-01 16:54:54.799703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.255 [2024-10-01 16:54:54.799710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.799985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.799992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.800280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.800286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.800604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.800611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.800880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.800887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 
00:30:03.256 [2024-10-01 16:54:54.801169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.801176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.801457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.801464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.801744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.801752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.802080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.802089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.802371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.802378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.802659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.802666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.802976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.802983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.803316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.803323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.803603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.803611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.803912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.803919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 
00:30:03.256 [2024-10-01 16:54:54.804167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.804174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.804466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.804472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.804635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.804642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.804963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.804972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.805244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.805251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.805572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.805579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.805874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.805880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.806156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.806164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.806486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.806493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-10-01 16:54:54.806814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-10-01 16:54:54.806821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 
00:30:03.256 [2024-10-01 16:54:54.807103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.256 [2024-10-01 16:54:54.807110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.256 qpair failed and we were unable to recover it.
00:30:03.256 [2024-10-01 16:54:54.807411 .. 16:54:54.864882] (repeated entries omitted: the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." sequence recurs for every reconnect attempt of tqpair=0x7fde64000b90 against addr=10.0.0.2, port=4420)
00:30:03.262 [2024-10-01 16:54:54.865215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.262 [2024-10-01 16:54:54.865222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.262 qpair failed and we were unable to recover it.
00:30:03.262 [2024-10-01 16:54:54.865524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.865532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.865824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.865831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.866012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.866019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.866397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.866404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.866740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.866747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.867042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.867049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.867353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.867360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.867630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.867637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.867944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.867950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.868308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.868315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 
00:30:03.262 [2024-10-01 16:54:54.868598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.868605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.868884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.868892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.869060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.869068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.869378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.869385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.869678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.869686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.870008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.870015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.870319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.870326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.870628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.870635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.870932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.870938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.871129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.871136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 
00:30:03.262 [2024-10-01 16:54:54.871535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.871541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.871841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.871848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.872032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.872039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.872375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.872381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.872689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.872696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.872985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.872992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.873353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.873360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.873669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.873676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.874022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.874029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-10-01 16:54:54.874314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-10-01 16:54:54.874321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 
00:30:03.262 [2024-10-01 16:54:54.874500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.874507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.874819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.874826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.875007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.875015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.875377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.875384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.875677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.875684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.875963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.875971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.876174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.876181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.876465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.876471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.876628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.876635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.876938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.876945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 
00:30:03.263 [2024-10-01 16:54:54.877287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.877296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.877636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.877643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.877807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.877814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.878097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.878104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.878390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.878397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.878582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.878589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.878879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.878886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.879049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.879056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.879223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.879230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.879501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.879508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 
00:30:03.263 [2024-10-01 16:54:54.879792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.879799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.880101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.880108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.880464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.880471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.880741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.880748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.881066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.881074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.881270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.881277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.881603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.881610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.881981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.881988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.882352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.882359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.882526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.882534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 
00:30:03.263 [2024-10-01 16:54:54.882871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.882877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.883014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.883021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.883398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.883405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.883592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.883599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.883788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.883795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.884056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.884063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.884244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.884251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.884578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.884585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-10-01 16:54:54.884904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-10-01 16:54:54.884911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.885254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.885261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 
00:30:03.264 [2024-10-01 16:54:54.885567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.885574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.885747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.885754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.886093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.886100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.886343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.886349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.886624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.886631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.886965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.886979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.887267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.887274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.887578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.887584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.887753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.887760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.887925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.887932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 
00:30:03.264 [2024-10-01 16:54:54.888256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.888265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.888424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.888431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.888734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.888740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.889012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.889019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.889351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.889358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.889541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.889548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.889830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.889837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.890186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.890193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.890471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.890478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.890835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.890842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 
00:30:03.264 [2024-10-01 16:54:54.891135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.891143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.891300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.891308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.891556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.891562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.891757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.891764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.891940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.891947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.892268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.892275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.892570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.892577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.892880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.892887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.893179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.893186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.893556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.893563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 
00:30:03.264 [2024-10-01 16:54:54.893733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.893740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.894004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.894011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-10-01 16:54:54.894370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-10-01 16:54:54.894376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.894709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.894716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.895031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.895038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.895327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.895334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.895629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.895636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.895824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.895831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.896098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.896105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.896477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.896484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 
00:30:03.265 [2024-10-01 16:54:54.896781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.896788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.897081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.897089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.897369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.897376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.897698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.897705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.897889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.897896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.898180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.898187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.898496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.898503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.898823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.898830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.899112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.899119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.899409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.899416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 
00:30:03.265 [2024-10-01 16:54:54.899556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.899564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.899927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.899934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.900223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.900230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.900500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.900507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.900798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.900806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.901007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.901015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.901275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.901282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.901579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.901586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.901752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.901759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 00:30:03.265 [2024-10-01 16:54:54.902031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.265 [2024-10-01 16:54:54.902039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420 00:30:03.265 qpair failed and we were unable to recover it. 
00:30:03.265 [2024-10-01 16:54:54.902109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-10-01 16:54:54.902115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [... the three records above repeat for roughly 135 consecutive connection attempts on tqpair=0x7fde64000b90 (timestamps 16:54:54.902109 through 16:54:54.941187), every attempt failing with errno = 111; only the final attempt is shown below ...]
00:30:03.545 [2024-10-01 16:54:54.941179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.545 [2024-10-01 16:54:54.941187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde64000b90 with addr=10.0.0.2, port=4420
00:30:03.545 qpair failed and we were unable to recover it.
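errno = 111 is ECONNREFUSED: the target at 10.0.0.2:4420 is actively refusing the TCP connection, so every reconnect attempt fails immediately inside posix_sock_create. A minimal standalone sketch (not SPDK code; 127.0.0.1 is a placeholder, and any port with no listener behaves the same) that reproduces the same errno:

    /* connect() to a port nobody is listening on -> errno = 111 (ECONNREFUSED) */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the port, a local connect prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }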
00:30:03.545 Read completed with error (sct=0, sc=8)
00:30:03.545 starting I/O failed
00:30:03.545 [... 32 outstanding I/Os in total (25 reads, 7 writes) complete with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:30:03.546 [2024-10-01 16:54:54.941388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
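This burst is the driver failing everything still queued on the dead qpair: transport error -6 is -ENXIO ("No such device or address", as printed), and each completion carries status (sct=0, sc=8), which in status code type 0 (generic command status) is, per the NVMe base specification, "Command Aborted due to SQ Deletion", consistent with the submission queue being torn down while I/O was outstanding. A small sketch of how the (sct, sc) pair is unpacked from the 16-bit completion Status Field (layout per the NVMe base spec: bit 0 phase tag, bits 8:1 status code, bits 11:9 status code type; the sample value is taken from the log):

    #include <stdint.h>
    #include <stdio.h>

    /* Decode an NVMe completion Status Field into the (sct, sc) pair
     * printed in the log above. */
    static void decode_status(uint16_t status)
    {
        uint8_t sc  = (status >> 1) & 0xff;  /* Status Code */
        uint8_t sct = (status >> 9) & 0x07;  /* Status Code Type */
        printf("sct=%u, sc=%u\n", sct, sc);
    }

    int main(void)
    {
        /* sct=0 (generic), sc=0x08: "Command Aborted due to SQ Deletion" */
        decode_status((uint16_t)((0x0 << 9) | (0x08 << 1)));
        return 0;
    }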
00:30:03.546 [2024-10-01 16:54:54.941716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-10-01 16:54:54.941731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [... the retry pattern then restarts against a new qpair, tqpair=0x7fde68000b90: roughly 60 further connection attempts (timestamps 16:54:54.941716 through 16:54:54.959659) all fail with errno = 111; only the final attempt is shown below ...]
00:30:03.548 [2024-10-01 16:54:54.959652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.548 [2024-10-01 16:54:54.959659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420
00:30:03.548 qpair failed and we were unable to recover it.
00:30:03.548 [2024-10-01 16:54:54.960015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.960022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.960306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.960313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.960482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.960489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.960776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.960783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.961076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.961083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.961360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.961367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.961527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.961535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.961811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.961818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.962122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.962129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.962447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.962454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 
00:30:03.548 [2024-10-01 16:54:54.962762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.962769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.963044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.963052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.963332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.963339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.963507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.963517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.963813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.963820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.964004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.964011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.964322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.964329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.964516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.964523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.964852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.964858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.965035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.965043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 
00:30:03.548 [2024-10-01 16:54:54.965368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.965375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.965698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.965705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.965851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.965859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.966132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.966139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.966309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.966317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.966525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.966532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.966848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.966855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.967154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.967161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.967438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.967445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 00:30:03.548 [2024-10-01 16:54:54.967714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.548 [2024-10-01 16:54:54.967721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.548 qpair failed and we were unable to recover it. 
00:30:03.549 [2024-10-01 16:54:54.968008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.968015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.968352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.968359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.968534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.968542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.968800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.968807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.969107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.969114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.969292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.969300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.969563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.969570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.969812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.969820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.970056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.970063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.970248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.970255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 
00:30:03.549 [2024-10-01 16:54:54.970452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.970459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.970612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.970619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.970943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.970950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.971235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.971243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.971413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.971419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.971677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.971684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.971930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.971937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.972251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.972258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.972590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.972597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.972760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.972767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 
00:30:03.549 [2024-10-01 16:54:54.973083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.973090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.973379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.973386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.973666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.973673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.973976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.973986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.974179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.974186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.974467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.974474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.974752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.974759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.975040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.975047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.975359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.975366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.975645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.975652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 
00:30:03.549 [2024-10-01 16:54:54.975937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.975944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.976226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.549 [2024-10-01 16:54:54.976233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.549 qpair failed and we were unable to recover it. 00:30:03.549 [2024-10-01 16:54:54.976422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.976429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.976673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.976680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.976962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.976971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.977258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.977264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.977561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.977568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.977907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.977914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.978190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.978197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.978475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.978482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 
00:30:03.550 [2024-10-01 16:54:54.978608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.978615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.978915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.978922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.979207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.979215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.979507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.979513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.979795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.979802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.980078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.980085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.980252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.980258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.980535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.980542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.980824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.980831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.981127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.981134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 
00:30:03.550 [2024-10-01 16:54:54.981416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.981423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.981574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.981582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.981763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.981770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.982068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.982076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.982238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.982245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.982526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.982534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.982809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.982816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.983123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.983130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.983405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.983412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.983593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.983600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 
00:30:03.550 [2024-10-01 16:54:54.983861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.983868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.984145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.984152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.984419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.984426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.984707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.550 [2024-10-01 16:54:54.984715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.550 qpair failed and we were unable to recover it. 00:30:03.550 [2024-10-01 16:54:54.985025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.985032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.985320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.985327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.985610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.985618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.985912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.985919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.986212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.986219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.986405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.986412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 
00:30:03.551 [2024-10-01 16:54:54.986583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.986590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.986862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.986869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.987133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.987140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.987536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.987543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.987838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.987845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.988132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.988139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.988423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.988430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.988732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.988739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.989017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.989024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.989328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.989334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 
00:30:03.551 [2024-10-01 16:54:54.989613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.989620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.989901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.989908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.990240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.990247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.990410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.990418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.990711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.990717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.991062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.991069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.991371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.991378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.991572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.991579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.991869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.991875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.992172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.992179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 
00:30:03.551 [2024-10-01 16:54:54.992500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.992507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.992824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.992830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.993116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.993124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.993305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.993312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.993606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.551 [2024-10-01 16:54:54.993614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.551 qpair failed and we were unable to recover it. 00:30:03.551 [2024-10-01 16:54:54.993922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-10-01 16:54:54.993929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-10-01 16:54:54.994229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-10-01 16:54:54.994236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-10-01 16:54:54.994546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-10-01 16:54:54.994553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-10-01 16:54:54.994729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-10-01 16:54:54.994736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 00:30:03.552 [2024-10-01 16:54:54.995015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.552 [2024-10-01 16:54:54.995022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.552 qpair failed and we were unable to recover it. 
00:30:03.552 [2024-10-01 16:54:54.995202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.552 [2024-10-01 16:54:54.995210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420
00:30:03.552 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed triplet repeats for roughly 200 further connection attempts between 16:54:54.995 and 16:54:55.055, always with errno = 111, tqpair=0x7fde68000b90, addr=10.0.0.2, port=4420 ...]
00:30:03.558 [2024-10-01 16:54:55.055862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.558 [2024-10-01 16:54:55.055869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420
00:30:03.558 qpair failed and we were unable to recover it.
00:30:03.558 [2024-10-01 16:54:55.056161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-10-01 16:54:55.056168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-10-01 16:54:55.056450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-10-01 16:54:55.056456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.558 qpair failed and we were unable to recover it. 00:30:03.558 [2024-10-01 16:54:55.056771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.558 [2024-10-01 16:54:55.056778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.057054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.057061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.057390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.057397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.057688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.057695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.057976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.057984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.058283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.058290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.058595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.058602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.058904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.058911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 
00:30:03.559 [2024-10-01 16:54:55.059240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.059247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.059535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.059542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.059825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.059832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.060128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.060135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.060417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.060424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.060714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.060721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.061028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.061036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.061213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.061221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.061523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.061531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.061836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.061842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 
00:30:03.559 [2024-10-01 16:54:55.062028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.062039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.062333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.062340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.062660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.062666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.062956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.062963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.063250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.063257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.063532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.063540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.063847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.063854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.064134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.064141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.064448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.064454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.064774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.064781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 
00:30:03.559 [2024-10-01 16:54:55.065058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.065065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.065310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.065317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.065619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.065626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.065818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.559 [2024-10-01 16:54:55.065825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.559 qpair failed and we were unable to recover it. 00:30:03.559 [2024-10-01 16:54:55.066091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.066098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-10-01 16:54:55.066256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.066263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-10-01 16:54:55.066551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.066558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-10-01 16:54:55.066823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.066830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-10-01 16:54:55.067172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.067180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-10-01 16:54:55.067500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.067507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 
00:30:03.560 [2024-10-01 16:54:55.067816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.067823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-10-01 16:54:55.068103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.068110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-10-01 16:54:55.068390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.068396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-10-01 16:54:55.068661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.068668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-10-01 16:54:55.069018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.069025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-10-01 16:54:55.069300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.069307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-10-01 16:54:55.069580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.069587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-10-01 16:54:55.069930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.069937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-10-01 16:54:55.070225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.070232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-10-01 16:54:55.070533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.070540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 
00:30:03.560 [2024-10-01 16:54:55.070710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.070717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-10-01 16:54:55.070900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.070907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-10-01 16:54:55.071149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.071156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 [2024-10-01 16:54:55.071310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.560 [2024-10-01 16:54:55.071317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde68000b90 with addr=10.0.0.2, port=4420 00:30:03.560 qpair failed and we were unable to recover it. 00:30:03.560 Read completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Read completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Read completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Read completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Read completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Read completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Read completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Write completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Read completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Read completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Write completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Write completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Read completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Write completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Write completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Read completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Read completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Write completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Write completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Write completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Write completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Read completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Write completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Read completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 00:30:03.560 Read completed with error (sct=0, sc=8) 00:30:03.560 starting I/O failed 
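errno = 111 on Linux is ECONNREFUSED: the host at 10.0.0.2 is reachable, but nothing is accepting TCP connections on port 4420 (the NVMe/TCP port used throughout this run). A minimal standalone sketch, not SPDK code, with the address and port copied from the log, that reproduces the same errno when no listener is present:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420), /* NVMe/TCP listen port from the log */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With a reachable host but no listener this prints
             * "connect() failed, errno = 111 (Connection refused)",
             * the same errno posix_sock_create reports above. */
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
        }
        close(fd);
        return 0;
    }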
00:30:03.560 Read completed with error (sct=0, sc=8)
00:30:03.560 starting I/O failed
[... 32 outstanding I/Os in total (18 reads, 14 writes) complete with error (sct=0, sc=8) and are failed back as the qpair goes down ...]
00:30:03.560 [2024-10-01 16:54:55.071515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:03.560 [2024-10-01 16:54:55.071870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.561 [2024-10-01 16:54:55.071888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:03.561 qpair failed and we were unable to recover it.
[... reconnection now proceeds against a fresh qpair, tqpair=0x7fde70000b90, same address and port; 8 such failures fall within this span ...]
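The (sct=0, sc=8) status on the 32 aborted I/Os above decodes, per the Generic Command Status table of the NVMe base specification, to "Command Aborted due to SQ Deletion", which is consistent with in-flight commands being failed back while the queue pair is torn down. A small decoding sketch; the constants are written out from the specification rather than taken from an SPDK header:

    #include <stdio.h>

    /* Decode the (sct, sc) pairs printed in the log. Values follow the
     * NVMe base specification; only codes relevant here are handled. */
    static const char *decode_status(int sct, int sc)
    {
        if (sct == 0) { /* Generic Command Status */
            switch (sc) {
            case 0x00: return "Successful Completion";
            case 0x04: return "Data Transfer Error";
            case 0x06: return "Internal Error";
            case 0x07: return "Command Abort Requested";
            case 0x08: return "Command Aborted due to SQ Deletion";
            }
        }
        return "unknown (extend the table as needed)";
    }

    int main(void)
    {
        /* All 32 failed I/Os above carry sct=0, sc=8: */
        printf("sct=0 sc=8 -> %s\n", decode_status(0, 8));
        return 0;
    }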
00:30:03.561 [2024-10-01 16:54:55.074676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.561 [2024-10-01 16:54:55.074684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:03.561 qpair failed and we were unable to recover it.
[... this failure repeats 100 more times for tqpair=0x7fde70000b90 between 16:54:55.074676 and 16:54:55.103011, and the qpair is still unrecovered at the end of this excerpt ...]
00:30:03.564 [2024-10-01 16:54:55.103288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-10-01 16:54:55.103296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-10-01 16:54:55.103544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-10-01 16:54:55.103552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-10-01 16:54:55.103864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-10-01 16:54:55.103872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-10-01 16:54:55.104069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-10-01 16:54:55.104077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-10-01 16:54:55.104137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-10-01 16:54:55.104145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-10-01 16:54:55.104322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-10-01 16:54:55.104331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-10-01 16:54:55.104613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-10-01 16:54:55.104622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-10-01 16:54:55.104917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-10-01 16:54:55.104926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-10-01 16:54:55.105098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-10-01 16:54:55.105108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-10-01 16:54:55.105375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-10-01 16:54:55.105384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 
00:30:03.564 [2024-10-01 16:54:55.105680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.564 [2024-10-01 16:54:55.105688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.564 qpair failed and we were unable to recover it. 00:30:03.564 [2024-10-01 16:54:55.105873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.105881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.106166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.106175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.106469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.106477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.106740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.106748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.106951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.106959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.107082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.107090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.107364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.107373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.107680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.107689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.107942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.107951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 
00:30:03.565 [2024-10-01 16:54:55.108267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.108276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.108577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.108586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.108880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.108889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.109158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.109166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.109479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.109488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.109792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.109801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.109975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.109983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.110177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.110185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.110481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.110490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.110653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.110661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 
00:30:03.565 [2024-10-01 16:54:55.110933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.110941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.111216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.111224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.111530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.111538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.111796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.111805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.112113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.112121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.112300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.112308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.112589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.112598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.112894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.112902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.113068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.113077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.113361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.113369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 
00:30:03.565 [2024-10-01 16:54:55.113653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.113661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.113953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.113962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.114114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.114123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.114309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.114317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.114683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.565 [2024-10-01 16:54:55.114691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.565 qpair failed and we were unable to recover it. 00:30:03.565 [2024-10-01 16:54:55.114957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.114964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.115131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.115139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.115295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.115304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.115571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.115580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.115914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.115922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 
00:30:03.566 [2024-10-01 16:54:55.116188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.116196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.116511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.116519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.116792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.116800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.117113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.117122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.117408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.117416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.117712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.117721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.117860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.117869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.118166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.118175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.118478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.118488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.118798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.118808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 
00:30:03.566 [2024-10-01 16:54:55.119080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.119090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.119329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.119338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.119607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.119617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.119910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.119918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.120247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.120256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.120528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.120537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.120803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.120813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.121108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.121117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.121428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.121435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.121708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.121717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 
00:30:03.566 [2024-10-01 16:54:55.121866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.121876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.122087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.122096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.122384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.122394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.122652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.122660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.122847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.122855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.566 [2024-10-01 16:54:55.123150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.566 [2024-10-01 16:54:55.123159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.566 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.123443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.123451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.123510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.123516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.123667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.123677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.123955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.123963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 
00:30:03.567 [2024-10-01 16:54:55.124254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.124263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.124573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.124582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.124852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.124861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.125125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.125134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.125409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.125417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.125582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.125592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.125921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.125930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.126235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.126245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.126523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.126532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.126685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.126694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 
00:30:03.567 [2024-10-01 16:54:55.127009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.127017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.127197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.127206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.127528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.127536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.127830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.127839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.128139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.128147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.128410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.128419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.128734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.128743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.128992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.129001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.129306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.129315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.129601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.129610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 
00:30:03.567 [2024-10-01 16:54:55.129910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.129919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.130218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.130226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.130501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.130510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.130772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.130782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.131089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.131098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.131276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.131284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.131483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.131491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.131614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.131622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.131895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.567 [2024-10-01 16:54:55.131903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.567 qpair failed and we were unable to recover it. 00:30:03.567 [2024-10-01 16:54:55.132094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.132102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 
00:30:03.568 [2024-10-01 16:54:55.132366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.132375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-10-01 16:54:55.132686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.132695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-10-01 16:54:55.132876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.132885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-10-01 16:54:55.133153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.133162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-10-01 16:54:55.133206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.133214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-10-01 16:54:55.133403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.133411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-10-01 16:54:55.133610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.133618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-10-01 16:54:55.133787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.133795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-10-01 16:54:55.133988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.133998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-10-01 16:54:55.134217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.134226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 
00:30:03.568 [2024-10-01 16:54:55.134486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.134495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-10-01 16:54:55.134769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.134778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-10-01 16:54:55.134958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.134967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-10-01 16:54:55.135261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.135269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-10-01 16:54:55.135460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.135469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-10-01 16:54:55.135643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.135651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-10-01 16:54:55.135910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.135918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-10-01 16:54:55.136197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.136207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-10-01 16:54:55.136505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.136514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 00:30:03.568 [2024-10-01 16:54:55.136796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.568 [2024-10-01 16:54:55.136806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.568 qpair failed and we were unable to recover it. 
00:30:03.568 [2024-10-01 16:54:55.137129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.568 [2024-10-01 16:54:55.137138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:03.568 qpair failed and we were unable to recover it.
[... the same three-message failure repeats for every subsequent connect() attempt in this interval, with only the timestamps advancing; tqpair, address, port, and errno are identical throughout ...]
00:30:03.575 [2024-10-01 16:54:55.195766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.575 [2024-10-01 16:54:55.195776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:03.575 qpair failed and we were unable to recover it.
00:30:03.575 [2024-10-01 16:54:55.196076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.196084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-10-01 16:54:55.196399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.196407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-10-01 16:54:55.196724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.196733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-10-01 16:54:55.196894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.196902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-10-01 16:54:55.197163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.197172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-10-01 16:54:55.197475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.197483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-10-01 16:54:55.197742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.197750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-10-01 16:54:55.197944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.197952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-10-01 16:54:55.198177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.198185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-10-01 16:54:55.198455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.198463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 
00:30:03.575 [2024-10-01 16:54:55.198745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.198753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-10-01 16:54:55.199019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.199027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-10-01 16:54:55.199290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.199299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-10-01 16:54:55.199594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.199602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-10-01 16:54:55.199771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.199779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-10-01 16:54:55.200026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.200043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-10-01 16:54:55.200297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.200306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-10-01 16:54:55.200605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.200615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-10-01 16:54:55.200899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.200910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 00:30:03.575 [2024-10-01 16:54:55.201200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.201209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.575 qpair failed and we were unable to recover it. 
00:30:03.575 [2024-10-01 16:54:55.201506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.575 [2024-10-01 16:54:55.201515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.201784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.201793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.202072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.202081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.202342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.202350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.202641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.202650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.202909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.202916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.203176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.203184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.203476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.203484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.203766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.203775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.204017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.204026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 
00:30:03.576 [2024-10-01 16:54:55.204367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.204376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.204667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.204677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.205001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.205010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.205300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.205309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.205460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.205469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.205749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.205757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.206044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.206053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.206304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.206312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.206570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.206578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.206874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.206882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 
00:30:03.576 [2024-10-01 16:54:55.207159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.207168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.207473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.207481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.207749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.207758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.208027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.208036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.208440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.208449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.576 [2024-10-01 16:54:55.208714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.576 [2024-10-01 16:54:55.208722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.576 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.208931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.208941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.209218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.209227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.209490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.209498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.209784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.209793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 
00:30:03.851 [2024-10-01 16:54:55.210064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.210073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.210421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.210431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.210715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.210725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.211023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.211032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.211321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.211329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.211617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.211627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.211938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.211947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.212237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.212246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.212546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.212560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.212850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.212859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 
00:30:03.851 [2024-10-01 16:54:55.213127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.213136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.213428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.213438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.213701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.213712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.213980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.213990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.214327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.214336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.214607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.214617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.214875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.214885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.215179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.851 [2024-10-01 16:54:55.215189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.851 qpair failed and we were unable to recover it. 00:30:03.851 [2024-10-01 16:54:55.215456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.215466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.215736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.215745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 
00:30:03.852 [2024-10-01 16:54:55.216022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.216032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.216330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.216339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.216604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.216613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.216900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.216910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.217194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.217204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.217355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.217364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.217634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.217644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.217928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.217937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.218219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.218230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.218520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.218529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 
00:30:03.852 [2024-10-01 16:54:55.218806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.218816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.218961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.218974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.219237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.219247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.219416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.219426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.219725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.219734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.219977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.219987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.220267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.220277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.220533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.220543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.220841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.220851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.221055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.221064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 
00:30:03.852 [2024-10-01 16:54:55.221345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.221354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.221630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.221639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.221835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.221845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.222130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.222140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.222384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.222394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.222698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.222707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.223000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.223009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.223312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.223321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.223591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.223603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.223781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.223791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 
00:30:03.852 [2024-10-01 16:54:55.224095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.852 [2024-10-01 16:54:55.224105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.852 qpair failed and we were unable to recover it. 00:30:03.852 [2024-10-01 16:54:55.224401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.224411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.224672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.224682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.224939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.224948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.225247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.225257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.225524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.225533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.225686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.225696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.225983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.225993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.226182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.226191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.226486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.226496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 
00:30:03.853 [2024-10-01 16:54:55.226804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.226814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.227072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.227081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.227375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.227385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.227638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.227648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.227922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.227932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.228230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.228240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.228518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.228528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.228806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.228816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.229113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.229122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.229426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.229435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 
00:30:03.853 [2024-10-01 16:54:55.229718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.229726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.230011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.230020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.230319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.230327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.230597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.230605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.230894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.230902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.231083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.231094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.231287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.231297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.231607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.231616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.231836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.231843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 00:30:03.853 [2024-10-01 16:54:55.232249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.853 [2024-10-01 16:54:55.232258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.853 qpair failed and we were unable to recover it. 
00:30:03.853 [2024-10-01 16:54:55.232553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.853 [2024-10-01 16:54:55.232562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:03.853 qpair failed and we were unable to recover it.
[... the same three-record pattern -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats identically for every reconnect attempt from 16:54:55.232746 through 16:54:55.290404; roughly 200 duplicate records elided ...]
00:30:03.859 [2024-10-01 16:54:55.290672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.859 [2024-10-01 16:54:55.290680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:03.859 qpair failed and we were unable to recover it.
00:30:03.859 [2024-10-01 16:54:55.290977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-10-01 16:54:55.290986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-10-01 16:54:55.291279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-10-01 16:54:55.291287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-10-01 16:54:55.291568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-10-01 16:54:55.291576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-10-01 16:54:55.291865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-10-01 16:54:55.291874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-10-01 16:54:55.292138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-10-01 16:54:55.292146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-10-01 16:54:55.292411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-10-01 16:54:55.292419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-10-01 16:54:55.292709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-10-01 16:54:55.292718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-10-01 16:54:55.292986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.859 [2024-10-01 16:54:55.292995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.859 qpair failed and we were unable to recover it. 00:30:03.859 [2024-10-01 16:54:55.293295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.293303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.293593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.293601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 
00:30:03.860 [2024-10-01 16:54:55.293801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.293810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.294119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.294128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.294435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.294444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.294714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.294722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.294829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.294836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.295124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.295133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.295409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.295418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.295681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.295689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.295983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.295991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.296266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.296275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 
00:30:03.860 [2024-10-01 16:54:55.296546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.296554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.296816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.296824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.297119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.297128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.297330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.297338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.297582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.297590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.297836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.297846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.298156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.298165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.298430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.298438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.298739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.298747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.299013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.299021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 
00:30:03.860 [2024-10-01 16:54:55.299325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.299334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.299590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.299597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.299945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.299954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.300243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.300252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.300521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.300529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.300800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.300809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.300977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.300987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.301267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.301277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.301545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.301553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.301880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.301889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 
00:30:03.860 [2024-10-01 16:54:55.302188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.302197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.302506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.302514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.302797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.302806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.860 qpair failed and we were unable to recover it. 00:30:03.860 [2024-10-01 16:54:55.303077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.860 [2024-10-01 16:54:55.303085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.303370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.303378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.303671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.303679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.303987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.303995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.304279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.304286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.304574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.304583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.304851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.304860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 
00:30:03.861 [2024-10-01 16:54:55.305080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.305088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.305375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.305383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.305691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.305700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.305978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.305986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.306278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.306287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.306606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.306615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.306904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.306913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.307212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.307221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.307488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.307498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.307754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.307762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 
00:30:03.861 [2024-10-01 16:54:55.308064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.308072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.308363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.308372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.308635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.308643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.308932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.308941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.309223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.309232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.309509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.309520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.309778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.309787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.310083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.310092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.310393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.310402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.310659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.310669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 
00:30:03.861 [2024-10-01 16:54:55.310938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.310947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.311216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.311225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.311517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.311527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.311783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.311792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.312084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.312093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.312380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.312389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.312723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.312732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.861 [2024-10-01 16:54:55.313003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.861 [2024-10-01 16:54:55.313011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.861 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.313228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.313236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.313553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.313561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 
00:30:03.862 [2024-10-01 16:54:55.313841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.313849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.314145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.314154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.314471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.314479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.314786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.314795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.315074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.315083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.315352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.315360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.315636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.315644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.315905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.315913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.316192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.316201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.316472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.316480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 
00:30:03.862 [2024-10-01 16:54:55.316638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.316648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.316939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.316947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.317220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.317229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.317519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.317527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.317811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.317819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.318094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.318102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.318403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.318412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.318686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.318695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.318966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.318977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.319244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.319252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 
00:30:03.862 [2024-10-01 16:54:55.319512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.319520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.319777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.319785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.320085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.320093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.320365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.320374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.320635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.320643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.320932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.320942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.321210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.321219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.321391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.321401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.321684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.321692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.321962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.321974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 
00:30:03.862 [2024-10-01 16:54:55.322249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.322257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.322548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.322557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.322715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.322724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.323016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.323025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.323329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.323337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.323646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.862 [2024-10-01 16:54:55.323654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.862 qpair failed and we were unable to recover it. 00:30:03.862 [2024-10-01 16:54:55.323929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.863 [2024-10-01 16:54:55.323937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.863 qpair failed and we were unable to recover it. 00:30:03.863 [2024-10-01 16:54:55.324217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.863 [2024-10-01 16:54:55.324226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.863 qpair failed and we were unable to recover it. 00:30:03.863 [2024-10-01 16:54:55.324400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.863 [2024-10-01 16:54:55.324408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.863 qpair failed and we were unable to recover it. 00:30:03.863 [2024-10-01 16:54:55.324689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.863 [2024-10-01 16:54:55.324698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.863 qpair failed and we were unable to recover it. 
00:30:03.863 [2024-10-01 16:54:55.324951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.863 [2024-10-01 16:54:55.324960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.863 qpair failed and we were unable to recover it. 00:30:03.863 [2024-10-01 16:54:55.325145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.863 [2024-10-01 16:54:55.325153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.863 qpair failed and we were unable to recover it. 00:30:03.863 [2024-10-01 16:54:55.325464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.863 [2024-10-01 16:54:55.325472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.863 qpair failed and we were unable to recover it. 00:30:03.863 [2024-10-01 16:54:55.325700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.863 [2024-10-01 16:54:55.325708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.863 qpair failed and we were unable to recover it. 00:30:03.863 [2024-10-01 16:54:55.326020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.863 [2024-10-01 16:54:55.326029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.863 qpair failed and we were unable to recover it. 00:30:03.863 [2024-10-01 16:54:55.326339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.863 [2024-10-01 16:54:55.326347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.863 qpair failed and we were unable to recover it. 00:30:03.863 [2024-10-01 16:54:55.326609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.863 [2024-10-01 16:54:55.326618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.863 qpair failed and we were unable to recover it. 00:30:03.863 [2024-10-01 16:54:55.326912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.863 [2024-10-01 16:54:55.326920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.863 qpair failed and we were unable to recover it. 00:30:03.863 [2024-10-01 16:54:55.327209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.863 [2024-10-01 16:54:55.327218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.863 qpair failed and we were unable to recover it. 00:30:03.863 [2024-10-01 16:54:55.327512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.863 [2024-10-01 16:54:55.327521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.863 qpair failed and we were unable to recover it. 
00:30:03.863 [2024-10-01 16:54:55.327785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.863 [2024-10-01 16:54:55.327793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:03.863 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats about 200 more times with only the microsecond timestamps changing (16:54:55.327785 through 16:54:55.383871), always for tqpair=0x7fde70000b90 at addr=10.0.0.2, port=4420 ...]
00:30:03.868 [2024-10-01 16:54:55.383862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.868 [2024-10-01 16:54:55.383871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:03.868 qpair failed and we were unable to recover it.
00:30:03.868 [2024-10-01 16:54:55.384075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.384083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.384351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.384360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.384653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.384661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.385025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.385034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.385186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.385194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.385475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.385484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.385795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.385804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.386096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.386105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.386280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.386288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.386628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.386637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 
00:30:03.869 [2024-10-01 16:54:55.386920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.386928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.387222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.387231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.387506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.387514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.387669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.387678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.387944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.387954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.388136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.388144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.388418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.388427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.388580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.388597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.388770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.388780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.389035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.389044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 
00:30:03.869 [2024-10-01 16:54:55.389239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.389247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.389547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.389555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.389859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.389867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.390174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.390183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.390472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.390481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.390764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.390773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.391001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.391010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.391305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.391315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.391638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.391648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.391964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.391976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 
00:30:03.869 [2024-10-01 16:54:55.392155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.392163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.392447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.392457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.392750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.392761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.393056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.393065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.393376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.393385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.393610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.393619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.393916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.393926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.394220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.394229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.394548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.869 [2024-10-01 16:54:55.394556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.869 qpair failed and we were unable to recover it. 00:30:03.869 [2024-10-01 16:54:55.394851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.394859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 
00:30:03.870 [2024-10-01 16:54:55.395138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.395147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.395453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.395463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.395717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.395726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.396015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.396024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.396316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.396325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.396615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.396625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.396930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.396939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.397241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.397250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.397546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.397555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.397829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.397837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 
00:30:03.870 [2024-10-01 16:54:55.398140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.398148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.398447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.398457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.398770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.398779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.399098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.399108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.399400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.399409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.399763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.399771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.400078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.400086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.400374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.400383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.400624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.400633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.400907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.400915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 
00:30:03.870 [2024-10-01 16:54:55.401219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.401229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.401494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.401503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.401785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.401794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.402081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.402090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.402380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.402389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.402671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.402679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.402977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.402986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.403261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.403270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.403561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.403570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.403835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.403844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 
00:30:03.870 [2024-10-01 16:54:55.404129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.404138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.404446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.404455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.870 [2024-10-01 16:54:55.404741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.870 [2024-10-01 16:54:55.404754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.870 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.405059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.405068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.405387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.405396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.405654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.405663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.405958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.405967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.406267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.406276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.406563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.406571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.406884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.406893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 
00:30:03.871 [2024-10-01 16:54:55.407215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.407225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.407513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.407522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.407826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.407835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.408037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.408045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.408293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.408300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.408573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.408581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.408853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.408861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.409124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.409133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.409421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.409430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.409695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.409704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 
00:30:03.871 [2024-10-01 16:54:55.409996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.410005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.410290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.410299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.410592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.410601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.410889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.410897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.411185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.411194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.411505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.411514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.411796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.411804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.412093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.412103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.412351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.412361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.412624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.412633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 
00:30:03.871 [2024-10-01 16:54:55.412932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.412941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.413115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.413125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.413400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.413409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.413682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.413690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.413962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.413977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.414266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.414276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.414579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.414588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.414923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.414932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.415245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.415254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.415509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.415518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 
00:30:03.871 [2024-10-01 16:54:55.415781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.871 [2024-10-01 16:54:55.415789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.871 qpair failed and we were unable to recover it. 00:30:03.871 [2024-10-01 16:54:55.416080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.416088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-10-01 16:54:55.416395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.416406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-10-01 16:54:55.416667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.416675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-10-01 16:54:55.416942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.416950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-10-01 16:54:55.417233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.417242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-10-01 16:54:55.417509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.417517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-10-01 16:54:55.417811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.417820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-10-01 16:54:55.418095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.418103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-10-01 16:54:55.418375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.418383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 
00:30:03.872 [2024-10-01 16:54:55.418647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.418655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-10-01 16:54:55.418933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.418941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-10-01 16:54:55.419233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.419242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-10-01 16:54:55.419532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.419541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-10-01 16:54:55.419814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.419822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-10-01 16:54:55.420089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.420097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-10-01 16:54:55.420405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.420414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-10-01 16:54:55.420677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.420686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-10-01 16:54:55.420883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.420892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 00:30:03.872 [2024-10-01 16:54:55.421148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.872 [2024-10-01 16:54:55.421157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.872 qpair failed and we were unable to recover it. 
00:30:03.872 [2024-10-01 16:54:55.421456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.872 [2024-10-01 16:54:55.421464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:03.872 qpair failed and we were unable to recover it.
00:30:03.872 [... the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." triple repeats roughly 200 more times for tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420, between 16:54:55.421 and 16:54:55.480 ...]
00:30:03.877 [2024-10-01 16:54:55.480653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.877 [2024-10-01 16:54:55.480744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205bdf0 with addr=10.0.0.2, port=4420
00:30:03.877 qpair failed and we were unable to recover it.
00:30:03.878 [2024-10-01 16:54:55.481261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.878 [2024-10-01 16:54:55.481354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205bdf0 with addr=10.0.0.2, port=4420
00:30:03.878 qpair failed and we were unable to recover it.
00:30:03.878 [2024-10-01 16:54:55.481690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.878 [2024-10-01 16:54:55.481700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:03.878 qpair failed and we were unable to recover it.
00:30:03.878 [2024-10-01 16:54:55.481979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.481987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.482279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.482288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.482548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.482556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.482826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.482834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.483148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.483158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.483469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.483477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.483754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.483771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.484036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.484044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.484360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.484370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.484678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.484687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 
00:30:03.878 [2024-10-01 16:54:55.484975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.484985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.485270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.485279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.485547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.485557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.485740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.485748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.486019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.486027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.486233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.486242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.486516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.486524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.486826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.486834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.487004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.487012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.487301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.487309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 
00:30:03.878 [2024-10-01 16:54:55.487625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.487634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.487892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.487901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.488180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.488188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.488442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.488450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.488725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.488733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.489010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.489019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.489305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.489314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.489625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.489635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.489907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.489915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.490175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.490183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 
00:30:03.878 [2024-10-01 16:54:55.490457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.490465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.490739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.490747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.491014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.491023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.491294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.491302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.491560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.491568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.878 qpair failed and we were unable to recover it. 00:30:03.878 [2024-10-01 16:54:55.491862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.878 [2024-10-01 16:54:55.491871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.492143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.492151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.492458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.492467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.492738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.492748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.493030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.493039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 
00:30:03.879 [2024-10-01 16:54:55.493345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.493354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.493635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.493645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.493909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.493919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.494089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.494098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.494348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.494356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.494527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.494536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.494813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.494822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.495047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.495055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.495326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.495334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.495627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.495636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 
00:30:03.879 [2024-10-01 16:54:55.495947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.495957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.496098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.496108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.496223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.496234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.496507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.496517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.496825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.496834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.497097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.497106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.497395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.497404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.497708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.497717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.497984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.497996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.498164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.498173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 
00:30:03.879 [2024-10-01 16:54:55.498452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.498461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.498731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.498739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.499101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.499110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.499375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.499384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.499668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.499676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.499977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.499985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.500277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.500286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.500446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.500455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.500801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.500809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.879 qpair failed and we were unable to recover it. 00:30:03.879 [2024-10-01 16:54:55.501075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.879 [2024-10-01 16:54:55.501084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 
00:30:03.880 [2024-10-01 16:54:55.501347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.501355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.501652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.501662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.501979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.501988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.502272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.502280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.502574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.502584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.502927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.502936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.503215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.503224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.503510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.503518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.503810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.503818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.504092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.504102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 
00:30:03.880 [2024-10-01 16:54:55.504389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.504398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.504668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.504676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.504986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.504995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.505282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.505290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.505560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.505568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.505870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.505878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.506035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.506043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.506336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.506345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.506498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.506507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.506821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.506829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 
00:30:03.880 [2024-10-01 16:54:55.507118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.507126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.507276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.507285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.507632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.507641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.507908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.507916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.508196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.508205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.508331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.508338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.508660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.508668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.508932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.508940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.509233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.509242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.509506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.509514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 
00:30:03.880 [2024-10-01 16:54:55.509785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.509794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.510088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.510097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.510408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.510418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.510624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.510633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.510900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.510908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.511218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.511226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.511540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.511548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.880 [2024-10-01 16:54:55.511843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.880 [2024-10-01 16:54:55.511852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.880 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.512120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.512129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.512398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.512406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 
00:30:03.881 [2024-10-01 16:54:55.512705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.512714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.512982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.512991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.513276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.513285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.513578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.513586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.513844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.513852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.514138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.514147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.514456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.514466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.514709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.514717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.515008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.515016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.515320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.515329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 
00:30:03.881 [2024-10-01 16:54:55.515595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.515604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.515857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.515865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.516136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.516144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.516454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.516462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.516782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.516790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.517083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.517091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.517373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.517381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.517745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.517754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.518029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.518039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:03.881 [2024-10-01 16:54:55.518329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.518338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 
00:30:03.881 [2024-10-01 16:54:55.518654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.881 [2024-10-01 16:54:55.518662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:03.881 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.518951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.518961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.519259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.519270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.519554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.519562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.519838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.519848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.520139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.520147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.520414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.520422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.520757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.520765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.521035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.521044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.521328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.521337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 
00:30:04.156 [2024-10-01 16:54:55.521601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.521609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.521887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.521895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.522185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.522193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.522483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.522491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.522779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.522787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.523091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.523100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.523394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.523403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.523712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.523721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.524025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.524035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.524323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.524332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 
00:30:04.156 [2024-10-01 16:54:55.524604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.524614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.524890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.524900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.525210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.525220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.525486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.525495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.525771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.525781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.526068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.526077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.526341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.526350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.526615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.526625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.526920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.156 [2024-10-01 16:54:55.526929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.156 qpair failed and we were unable to recover it. 00:30:04.156 [2024-10-01 16:54:55.527074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.527085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 
00:30:04.157 [2024-10-01 16:54:55.527384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.527393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.527697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.527706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.528012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.528020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.528348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.528357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.528638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.528646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.528939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.528948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.529216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.529224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.529514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.529523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.529781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.529790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.530055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.530063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 
00:30:04.157 [2024-10-01 16:54:55.530319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.530327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.530629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.530637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.530948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.530958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.531257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.531266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.531528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.531536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.531813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.531821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.532085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.532093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.532349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.532357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.532660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.532669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.532953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.532961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 
00:30:04.157 [2024-10-01 16:54:55.533273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.533282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.533595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.533603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.533896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.533905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.534202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.534210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.534486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.534495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.534798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.534806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.535088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.535097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.535370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.535378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.535668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.535677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.535808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.535817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 
00:30:04.157 [2024-10-01 16:54:55.536051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.536060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.536356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.536365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.536648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.536658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.536923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.536932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.537212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.537220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.537494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.537503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.537767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.157 [2024-10-01 16:54:55.537777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.157 qpair failed and we were unable to recover it. 00:30:04.157 [2024-10-01 16:54:55.538063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.538072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.538382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.538391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.538698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.538708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 
00:30:04.158 [2024-10-01 16:54:55.539031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.539040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.539338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.539347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.539619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.539627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.539918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.539926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.540187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.540195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.540461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.540469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.540758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.540776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.541112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.541120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.541394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.541402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.541701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.541710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 
00:30:04.158 [2024-10-01 16:54:55.542025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.542034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.542320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.542328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.542628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.542638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.542946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.542955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.543236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.543246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.543531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.543540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.543804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.543812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.544123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.544132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.544351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.544359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.544522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.544531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 
00:30:04.158 [2024-10-01 16:54:55.544820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.544829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.545126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.545135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.545444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.545453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.545758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.545766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.545888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.545896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.546354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.546445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205bdf0 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.546743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.546781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205bdf0 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.547080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.547090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.547380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.547388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.547677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.547687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 
00:30:04.158 [2024-10-01 16:54:55.547978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.547987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.548269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.548279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.548548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.548555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.548862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.548872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.549152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.549161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.158 qpair failed and we were unable to recover it. 00:30:04.158 [2024-10-01 16:54:55.549548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.158 [2024-10-01 16:54:55.549556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.549862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.549872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.550174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.550182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.550399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.550407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.550700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.550711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 
00:30:04.159 [2024-10-01 16:54:55.551023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.551032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.551307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.551316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.551610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.551619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.551932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.551942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.552255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.552264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.552538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.552546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.552853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.552862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.553135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.553143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.553310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.553318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.553601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.553610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 
00:30:04.159 [2024-10-01 16:54:55.553907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.553917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.554224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.554234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.554510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.554519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.554819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.554829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.555098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.555107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.555408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.555418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.555590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.555600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.555880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.555889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.556189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.556197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.556368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.556377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 
00:30:04.159 [2024-10-01 16:54:55.556692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.556700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.557009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.557018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.557208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.557216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.557521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.557529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.557831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.557840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.558097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.558106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.558428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.558436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.558747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.558755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.559022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.559031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.559353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.559361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 
00:30:04.159 [2024-10-01 16:54:55.559626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.559634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.559908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.559916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.560212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.560222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.560565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.159 [2024-10-01 16:54:55.560574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.159 qpair failed and we were unable to recover it. 00:30:04.159 [2024-10-01 16:54:55.560842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.560852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.561131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.561140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.561414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.561422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.561722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.561730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.562017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.562026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.562289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.562299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 
00:30:04.160 [2024-10-01 16:54:55.562573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.562581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.562880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.562888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.563153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.563162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.563471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.563479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.563739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.563747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.564013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.564023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.564293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.564301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.564560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.564569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.564887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.564895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.565179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.565188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 
00:30:04.160 [2024-10-01 16:54:55.565479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.565487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.565757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.565766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.566042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.566050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.566353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.566362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.566666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.566674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.566890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.566898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.567192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.567201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.567468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.567476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.567793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.567801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.568089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.568098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 
00:30:04.160 [2024-10-01 16:54:55.568391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.568399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.568664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.568671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.568849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.568858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.569107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.569116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.569382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.569391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.569669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.569677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.569984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.160 [2024-10-01 16:54:55.569994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.160 qpair failed and we were unable to recover it. 00:30:04.160 [2024-10-01 16:54:55.570283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-10-01 16:54:55.570291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-10-01 16:54:55.570581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-10-01 16:54:55.570590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 00:30:04.161 [2024-10-01 16:54:55.570771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.161 [2024-10-01 16:54:55.570779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.161 qpair failed and we were unable to recover it. 
00:30:04.166 [2024-10-01 16:54:55.623005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.623013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 00:30:04.166 [2024-10-01 16:54:55.623294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.623302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 00:30:04.166 [2024-10-01 16:54:55.623459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.623466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 00:30:04.166 [2024-10-01 16:54:55.623760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.623769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 00:30:04.166 [2024-10-01 16:54:55.624057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.624066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 00:30:04.166 [2024-10-01 16:54:55.624367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.624376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 00:30:04.166 [2024-10-01 16:54:55.624543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.624553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 00:30:04.166 [2024-10-01 16:54:55.624689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.624697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 00:30:04.166 [2024-10-01 16:54:55.624990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.624999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 00:30:04.166 [2024-10-01 16:54:55.625259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.625268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 
00:30:04.166 [2024-10-01 16:54:55.625562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.625570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 00:30:04.166 [2024-10-01 16:54:55.625849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.625857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 00:30:04.166 [2024-10-01 16:54:55.626131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.626141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 00:30:04.166 [2024-10-01 16:54:55.626339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.626348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 00:30:04.166 [2024-10-01 16:54:55.626391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.626399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 00:30:04.166 [2024-10-01 16:54:55.626678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.626688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 00:30:04.166 [2024-10-01 16:54:55.626961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.626973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 00:30:04.166 [2024-10-01 16:54:55.627265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.627273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 00:30:04.166 [2024-10-01 16:54:55.627545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.166 [2024-10-01 16:54:55.627554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.166 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.627824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.627833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 
00:30:04.167 [2024-10-01 16:54:55.628033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.628042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.628237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.628246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.628551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.628560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.628853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.628862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.629077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.629086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.629359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.629368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.629666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.629676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.629990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.629999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.630287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.630296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.630595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.630604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 
00:30:04.167 [2024-10-01 16:54:55.630789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.630798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.630974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.630982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.631237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.631247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.631556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.631565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.631716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.631726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.632001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.632010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.632183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.632192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.632476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.632486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.632773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.632782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.633099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.633108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 
00:30:04.167 [2024-10-01 16:54:55.633376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.633385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.633684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.633692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.634002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.634011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.634204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.634213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.634514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.634523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.634794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.634804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.635099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.635110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.635405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.635415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.635520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.635528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.635668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.635677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 
00:30:04.167 [2024-10-01 16:54:55.635980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.635992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.636290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.636299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.636478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.636486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.167 qpair failed and we were unable to recover it. 00:30:04.167 [2024-10-01 16:54:55.636623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.167 [2024-10-01 16:54:55.636632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.636783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.636792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.637061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.637071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.637372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.637383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.637667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.637675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.637993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.638002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.638276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.638285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 
00:30:04.168 [2024-10-01 16:54:55.638599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.638607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.638913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.638922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.638960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.638972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.639246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.639255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.639438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.639445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.639730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.639740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.639926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.639936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.640230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.640239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.640513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.640522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.640836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.640844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 
00:30:04.168 [2024-10-01 16:54:55.641126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.641134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.641319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.641329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.641622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.641632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.641939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.641948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.642178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.642186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.642440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.642450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.642769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.642778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.643047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.643055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.643344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.643352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.643623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.643632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 
00:30:04.168 [2024-10-01 16:54:55.643958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.643968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.644256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.644265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.644618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.644626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.644917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.644925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.645192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.645201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.645473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.645482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.645691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.645701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.646012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.646021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.646288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.646296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.646566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.646575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 
00:30:04.168 [2024-10-01 16:54:55.646760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.646769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.647074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.168 [2024-10-01 16:54:55.647082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.168 qpair failed and we were unable to recover it. 00:30:04.168 [2024-10-01 16:54:55.647336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.647345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.647637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.647645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.647956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.647965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.648024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.648032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.648300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.648308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.648563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.648573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.648845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.648855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.649167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.649177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 
00:30:04.169 [2024-10-01 16:54:55.649496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.649506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.649816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.649825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.649983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.649993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.650165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.650174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.650448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.650458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.650746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.650756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.651038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.651047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.651331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.651339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.651611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.651619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.651921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.651930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 
00:30:04.169 [2024-10-01 16:54:55.652193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.652202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.652482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.652490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.652780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.652788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.652965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.652977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.653254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.653262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.653577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.653585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.653849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.653857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.654022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.654030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.654202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.654211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.654532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.654541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 
00:30:04.169 [2024-10-01 16:54:55.654852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.654860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.655128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.655136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.655450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.655458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.655624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.655632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.655920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.655928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.656210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.169 [2024-10-01 16:54:55.656220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.169 qpair failed and we were unable to recover it. 00:30:04.169 [2024-10-01 16:54:55.656509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-10-01 16:54:55.656519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-10-01 16:54:55.656814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-10-01 16:54:55.656823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-10-01 16:54:55.657070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-10-01 16:54:55.657078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-10-01 16:54:55.657391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-10-01 16:54:55.657400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 
00:30:04.170 [2024-10-01 16:54:55.657600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-10-01 16:54:55.657609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-10-01 16:54:55.657918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-10-01 16:54:55.657927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-10-01 16:54:55.658100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-10-01 16:54:55.658109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-10-01 16:54:55.658382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-10-01 16:54:55.658391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-10-01 16:54:55.658671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-10-01 16:54:55.658680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-10-01 16:54:55.658961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-10-01 16:54:55.658973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-10-01 16:54:55.659132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-10-01 16:54:55.659140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-10-01 16:54:55.659393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-10-01 16:54:55.659402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-10-01 16:54:55.659692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-10-01 16:54:55.659700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 00:30:04.170 [2024-10-01 16:54:55.659924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.170 [2024-10-01 16:54:55.659932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.170 qpair failed and we were unable to recover it. 
00:30:04.170 [2024-10-01 16:54:55.660196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.170 [2024-10-01 16:54:55.660206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.170 qpair failed and we were unable to recover it.
[... the identical three-line sequence -- posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- repeats for every retry between 16:54:55.660 and 16:54:55.717; approximately 200 repetitions elided ...]
00:30:04.176 [2024-10-01 16:54:55.717519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.176 [2024-10-01 16:54:55.717528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.176 qpair failed and we were unable to recover it.
00:30:04.176 [2024-10-01 16:54:55.717698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.717707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.717907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.717916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.718209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.718219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.718512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.718521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.718726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.718736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.718957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.718966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.719257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.719266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.719443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.719452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.719669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.719678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.719985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.719994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 
00:30:04.176 [2024-10-01 16:54:55.720282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.720292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.720548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.720557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.720736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.720746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.720990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.720999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.721277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.721285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.721480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.721488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.721749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.721759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.721931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.721938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.722252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.722260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.722516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.722524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 
00:30:04.176 [2024-10-01 16:54:55.722797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.722807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.723109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.723118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.723429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.723438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.723731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.723741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.723794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.723802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.723994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.724004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.724283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.724292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.724569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.724578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.724863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.724873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.725134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.725144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 
00:30:04.176 [2024-10-01 16:54:55.725438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.725447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.725706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.725715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.725895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.725905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.726200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.726209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.726397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.176 [2024-10-01 16:54:55.726407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.176 qpair failed and we were unable to recover it. 00:30:04.176 [2024-10-01 16:54:55.726677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.726687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.726859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.726869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.727029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.727038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.727337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.727346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.727620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.727630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 
00:30:04.177 [2024-10-01 16:54:55.727936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.727945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.728239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.728249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.728544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.728553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.728860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.728870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.729133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.729143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.729416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.729425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.729711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.729720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.730005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.730014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.730319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.730328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.730632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.730642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 
00:30:04.177 [2024-10-01 16:54:55.730933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.730942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.731114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.731124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.731379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.731388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.731662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.731671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.731878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.731887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.732135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.732145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.732437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.732448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.732732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.732742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.733030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.733040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.733309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.733318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 
00:30:04.177 [2024-10-01 16:54:55.733578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.733588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.733882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.733891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.734267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.734277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.734585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.734594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.734889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.734899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.735169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.735179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.735468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.735478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.735751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.735760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.735995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.736005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.736294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.736303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 
00:30:04.177 [2024-10-01 16:54:55.736591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.736600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.736878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.736888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.737177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.737187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.177 qpair failed and we were unable to recover it. 00:30:04.177 [2024-10-01 16:54:55.737491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.177 [2024-10-01 16:54:55.737500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.737825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.737835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.738188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.738199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.738476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.738485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.738750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.738760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.739139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.739149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.739423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.739432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 
00:30:04.178 [2024-10-01 16:54:55.739749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.739758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.740009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.740018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.740210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.740219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.740487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.740497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.740808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.740817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.741098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.741108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.741421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.741431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.741701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.741711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.741992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.742001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.742159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.742168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 
00:30:04.178 [2024-10-01 16:54:55.742452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.742461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.742741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.742751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.742922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.742931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.743193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.743201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.743492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.743500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.743765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.743774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.744038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.744048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.744328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.744336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.744639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.744647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.744940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.744950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 
00:30:04.178 [2024-10-01 16:54:55.745243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.745251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.745565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.745574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.745745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.745754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.746021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.746030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.746294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.746302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.746576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.746584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.746879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.746887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.747156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.747165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.747465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.747473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.747736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.747744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 
00:30:04.178 [2024-10-01 16:54:55.748040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.748049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.748341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.178 [2024-10-01 16:54:55.748349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.178 qpair failed and we were unable to recover it. 00:30:04.178 [2024-10-01 16:54:55.748630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-10-01 16:54:55.748638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-10-01 16:54:55.748903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-10-01 16:54:55.748911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-10-01 16:54:55.749201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-10-01 16:54:55.749209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-10-01 16:54:55.749476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-10-01 16:54:55.749484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-10-01 16:54:55.749611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-10-01 16:54:55.749620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-10-01 16:54:55.749941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-10-01 16:54:55.749950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-10-01 16:54:55.750221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-10-01 16:54:55.750229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-10-01 16:54:55.750400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-10-01 16:54:55.750409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 
00:30:04.179 [2024-10-01 16:54:55.750722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-10-01 16:54:55.750730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-10-01 16:54:55.751002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-10-01 16:54:55.751010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-10-01 16:54:55.751307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-10-01 16:54:55.751315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-10-01 16:54:55.751618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-10-01 16:54:55.751627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-10-01 16:54:55.751803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-10-01 16:54:55.751811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-10-01 16:54:55.752047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-10-01 16:54:55.752058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-10-01 16:54:55.752300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-10-01 16:54:55.752309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-10-01 16:54:55.752606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-10-01 16:54:55.752615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-10-01 16:54:55.752906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-10-01 16:54:55.752915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 00:30:04.179 [2024-10-01 16:54:55.753200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.179 [2024-10-01 16:54:55.753217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.179 qpair failed and we were unable to recover it. 
00:30:04.179 [2024-10-01 16:54:55.753477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.179 [2024-10-01 16:54:55.753485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.179 qpair failed and we were unable to recover it.
00:30:04.179 [2024-10-01 16:54:55.753675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.179 [2024-10-01 16:54:55.753682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.179 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420, qpair unrecoverable) repeats for every reconnect attempt through 16:54:55.812740 ...]
00:30:04.185 [2024-10-01 16:54:55.812731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.185 [2024-10-01 16:54:55.812740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.185 qpair failed and we were unable to recover it.
00:30:04.185 [2024-10-01 16:54:55.813024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.813033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.813329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.813337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.813663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.813672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.813828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.813836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.814135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.814145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.814425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.814434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.814618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.814626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.814924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.814932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.815260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.815270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.815577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.815585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 
00:30:04.185 [2024-10-01 16:54:55.815843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.815851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.816134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.816143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.816438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.816448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.816702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.816711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.816997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.817006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.817289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.817298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.817435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.817443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.817729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.817737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.817938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.817946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.818231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.818240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 
00:30:04.185 [2024-10-01 16:54:55.818419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.818428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.818734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.818743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.819034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.819043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.819776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.819797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.820090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.820099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.820286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.820294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.820577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.820585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.820756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.820765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.821048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.821056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.821339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.821347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 
00:30:04.185 [2024-10-01 16:54:55.821601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.821611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.821877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.185 [2024-10-01 16:54:55.821887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.185 qpair failed and we were unable to recover it. 00:30:04.185 [2024-10-01 16:54:55.822134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.186 [2024-10-01 16:54:55.822142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.186 qpair failed and we were unable to recover it. 00:30:04.186 [2024-10-01 16:54:55.822425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.186 [2024-10-01 16:54:55.822433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.186 qpair failed and we were unable to recover it. 00:30:04.186 [2024-10-01 16:54:55.822696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.186 [2024-10-01 16:54:55.822705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.186 qpair failed and we were unable to recover it. 00:30:04.186 [2024-10-01 16:54:55.823024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.186 [2024-10-01 16:54:55.823032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.186 qpair failed and we were unable to recover it. 00:30:04.186 [2024-10-01 16:54:55.823301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.186 [2024-10-01 16:54:55.823309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.186 qpair failed and we were unable to recover it. 00:30:04.461 [2024-10-01 16:54:55.823616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.461 [2024-10-01 16:54:55.823626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.461 qpair failed and we were unable to recover it. 00:30:04.461 [2024-10-01 16:54:55.823882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.823891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.824171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.824179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 
00:30:04.462 [2024-10-01 16:54:55.824470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.824478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.824743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.824752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.825042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.825050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.825365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.825373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.825663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.825672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.825948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.825957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.826248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.826257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.826524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.826533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.826692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.826702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.826952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.826961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 
00:30:04.462 [2024-10-01 16:54:55.827691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.827709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.828040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.828051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.828355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.828363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.828539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.828548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.828709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.828718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.829008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.829017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.829282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.829291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.829591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.829599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.829897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.829906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.830198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.830206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 
00:30:04.462 [2024-10-01 16:54:55.830489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.830498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.830814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.830822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.831108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.831116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.831382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.831393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.832082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.832101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.832395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.832406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.832694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.832703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.832886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.832894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.833161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.833171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.833456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.833464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 
00:30:04.462 [2024-10-01 16:54:55.833654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.833662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.833940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.833950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.834277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.834286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.834552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.834561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.834820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.462 [2024-10-01 16:54:55.834829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.462 qpair failed and we were unable to recover it. 00:30:04.462 [2024-10-01 16:54:55.835131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.835140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.835432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.835440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.835756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.835764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.835955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.835965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.836266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.836275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 
00:30:04.463 [2024-10-01 16:54:55.836437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.836446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.836749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.836757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.837024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.837033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.837358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.837367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.837546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.837553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.837872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.837881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.838105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.838113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.838464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.838473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.838768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.838778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.839069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.839078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 
00:30:04.463 [2024-10-01 16:54:55.839377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.839387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.839659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.839668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.839929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.839939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.840253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.840263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.840584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.840593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.840864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.840873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.841145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.841155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.841431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.841440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.841709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.841717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.841992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.842001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 
00:30:04.463 [2024-10-01 16:54:55.842278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.842287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.842482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.842491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.842821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.842829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.843175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.843186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.843347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.843356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.843637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.843647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.843840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.843849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.844131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.844141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.844454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.844462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.844660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.844668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 
00:30:04.463 [2024-10-01 16:54:55.844921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.844931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.845109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.845120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.463 [2024-10-01 16:54:55.845395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.463 [2024-10-01 16:54:55.845403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.463 qpair failed and we were unable to recover it. 00:30:04.464 [2024-10-01 16:54:55.845689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-10-01 16:54:55.845707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-10-01 16:54:55.845940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-10-01 16:54:55.845949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-10-01 16:54:55.846246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-10-01 16:54:55.846255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-10-01 16:54:55.846541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-10-01 16:54:55.846550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-10-01 16:54:55.846735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-10-01 16:54:55.846745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-10-01 16:54:55.846917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-10-01 16:54:55.846926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-10-01 16:54:55.847217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-10-01 16:54:55.847227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 
00:30:04.464 [2024-10-01 16:54:55.847422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-10-01 16:54:55.847432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-10-01 16:54:55.847695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-10-01 16:54:55.847704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-10-01 16:54:55.847992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-10-01 16:54:55.848003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-10-01 16:54:55.848301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-10-01 16:54:55.848309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-10-01 16:54:55.848453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-10-01 16:54:55.848461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-10-01 16:54:55.848622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-10-01 16:54:55.848631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-10-01 16:54:55.848912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-10-01 16:54:55.848920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-10-01 16:54:55.849287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-10-01 16:54:55.849297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-10-01 16:54:55.849594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-10-01 16:54:55.849602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 00:30:04.464 [2024-10-01 16:54:55.849775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.464 [2024-10-01 16:54:55.849784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.464 qpair failed and we were unable to recover it. 
00:30:04.464 [2024-10-01 16:54:55.850071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.464 [2024-10-01 16:54:55.850080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.464 qpair failed and we were unable to recover it.
00:30:04.464 [2024-10-01 16:54:55.850240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.464 [2024-10-01 16:54:55.850248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.464 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for every subsequent reconnect attempt from 16:54:55.850 through 16:54:55.906: connect() to addr=10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error on the same tqpair=0x7fde70000b90, and each time the qpair fails and cannot be recovered ...]
00:30:04.470 [2024-10-01 16:54:55.906070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.470 [2024-10-01 16:54:55.906080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.470 qpair failed and we were unable to recover it.
00:30:04.470 [2024-10-01 16:54:55.906383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.906391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.906669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.906677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.906984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.906993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.907284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.907295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.907557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.907566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.907856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.907865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.908133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.908142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.908411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.908419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.908707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.908716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.909010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.909018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 
00:30:04.470 [2024-10-01 16:54:55.909323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.909332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.909615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.909623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.909893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.909902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.910179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.910189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.910356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.910365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.910658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.910668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.910961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.910983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.911249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.911258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.911544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.911553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.911805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.911813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 
00:30:04.470 [2024-10-01 16:54:55.911980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.911989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.912290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.912299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.912584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.912592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.912880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.912889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.913175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.913184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.913309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.913318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.913603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.913612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.913922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.913931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.914224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.914232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.914540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.914549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 
00:30:04.470 [2024-10-01 16:54:55.914906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.914916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.915195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.915203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.915501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.470 [2024-10-01 16:54:55.915510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.470 qpair failed and we were unable to recover it. 00:30:04.470 [2024-10-01 16:54:55.915777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.915786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.916078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.916086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.916355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.916363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.916650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.916658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.916926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.916934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.917225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.917234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.917554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.917563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 
00:30:04.471 [2024-10-01 16:54:55.917715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.917725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.918007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.918015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.918329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.918337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.918611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.918619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.918916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.918924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.919301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.919309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.919621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.919631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.919919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.919929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.920190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.920199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.920480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.920489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 
00:30:04.471 [2024-10-01 16:54:55.920818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.920828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.921127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.921135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.921438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.921448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.921759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.921767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.922064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.922072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.922345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.922354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.922641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.922649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.922831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.922840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.923124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.923132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.923429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.923437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 
00:30:04.471 [2024-10-01 16:54:55.923629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.923637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.923884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.923894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.924187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.924196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.924469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.924477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.924769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.924777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.925122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.471 [2024-10-01 16:54:55.925130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.471 qpair failed and we were unable to recover it. 00:30:04.471 [2024-10-01 16:54:55.925419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.925428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.925700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.925708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.926000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.926009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.926224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.926233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 
00:30:04.472 [2024-10-01 16:54:55.926569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.926580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.926856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.926865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.927140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.927148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.927421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.927429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.927588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.927598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.927892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.927902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.928103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.928111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.928461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.928470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.928759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.928768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.929038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.929046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 
00:30:04.472 [2024-10-01 16:54:55.929318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.929326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.929626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.929635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.929972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.929981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.930282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.930290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.930567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.930575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.930755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.930763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.930956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.930964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.931245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.931253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.931509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.931517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.931704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.931713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 
00:30:04.472 [2024-10-01 16:54:55.932037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.932046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.932370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.932380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.932666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.932675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.932932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.932942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.933233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.933241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.933514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.933522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.933816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.933824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.934112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.934120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.934399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.934407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.934683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.934691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 
00:30:04.472 [2024-10-01 16:54:55.934980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.934988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.935275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.935284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.935421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.935431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.935700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.472 [2024-10-01 16:54:55.935709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.472 qpair failed and we were unable to recover it. 00:30:04.472 [2024-10-01 16:54:55.936003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.936013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.936308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.936316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.936437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.936445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.936730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.936739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.937043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.937052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.937338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.937347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 
00:30:04.473 [2024-10-01 16:54:55.937641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.937651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.937904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.937912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.938268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.938277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.938587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.938596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.938859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.938868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.939169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.939178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.939465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.939474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.939739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.939747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.940030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.940039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.940346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.940355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 
00:30:04.473 [2024-10-01 16:54:55.940637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.940645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.940916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.940925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.941216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.941226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.941498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.941507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.941791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.941800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.942102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.942112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.942458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.942467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.942774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.942783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.943069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.943078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.943366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.943373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 
00:30:04.473 [2024-10-01 16:54:55.943667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.943676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.943984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.943994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.944290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.944298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.944571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.944579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.944874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.944882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.945190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.945198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.945464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.945472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.945744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.945752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.946027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.946035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.946312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.946320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 
00:30:04.473 [2024-10-01 16:54:55.946613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.946621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.946886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.473 [2024-10-01 16:54:55.946894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.473 qpair failed and we were unable to recover it. 00:30:04.473 [2024-10-01 16:54:55.947222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.947231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.947497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.947505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.947792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.947801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.948076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.948084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.948387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.948395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.948664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.948672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.948987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.948997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.949319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.949328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 
00:30:04.474 [2024-10-01 16:54:55.949525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.949536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.949803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.949811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.950109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.950117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.950446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.950454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.950712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.950720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.950997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.951005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.951314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.951322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.951578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.951586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.951887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.951895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.952173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.952182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 
00:30:04.474 [2024-10-01 16:54:55.952344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.952354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.952660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.952669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.952869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.952878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.953239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.953249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.953403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.953412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.953712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.953720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.954044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.954053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.954330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.954338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.954599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.954607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.954888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.954896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 
00:30:04.474 [2024-10-01 16:54:55.955095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.955104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.955412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.955420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.955714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.955721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.956002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.956011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.956275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.956284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.956558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.956566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.956859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.956867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.957160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.957168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.957462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.957470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 00:30:04.474 [2024-10-01 16:54:55.957780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.957789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.474 qpair failed and we were unable to recover it. 
00:30:04.474 [2024-10-01 16:54:55.958068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.474 [2024-10-01 16:54:55.958077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.958361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.958369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.958671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.958679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.959039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.959047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.959331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.959339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.959608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.959617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.959893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.959901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.960205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.960214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.960503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.960511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.960776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.960785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 
00:30:04.475 [2024-10-01 16:54:55.961081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.961091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.961414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.961423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.961715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.961723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.961990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.961999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.962313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.962323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.962592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.962601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.962888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.962898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.963184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.963194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.963376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.963384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.963646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.963653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 
00:30:04.475 [2024-10-01 16:54:55.963954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.963964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.964269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.964279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.964572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.964581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.964852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.964861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.965128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.965138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.965460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.965469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.965763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.965773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.966081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.966091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.966389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.966397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.966710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.966720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 
00:30:04.475 [2024-10-01 16:54:55.967044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.967052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.475 [2024-10-01 16:54:55.967331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.475 [2024-10-01 16:54:55.967339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.475 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.967634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.967642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.967949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.967958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.968260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.968269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.968617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.968625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.968907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.968915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.969213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.969221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.969536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.969546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.969861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.969870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 
00:30:04.476 [2024-10-01 16:54:55.970138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.970147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.970428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.970436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.970724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.970733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.971021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.971030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.971228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.971236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.971502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.971510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.971813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.971821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.972111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.972119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.972415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.972423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.972703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.972712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 
00:30:04.476 [2024-10-01 16:54:55.973006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.973017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.973397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.973405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.973747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.973756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.974056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.974065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.974379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.974387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.974650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.974659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.974935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.974943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.975129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.975137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.975445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.975454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.975762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.975770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 
00:30:04.476 [2024-10-01 16:54:55.976058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.976067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.976278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.976286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.976526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.976535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.976820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.976828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.977129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.977138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.977405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.977413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.977732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.977741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.978012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.978020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.978309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.978317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 00:30:04.476 [2024-10-01 16:54:55.978587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.476 [2024-10-01 16:54:55.978595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.476 qpair failed and we were unable to recover it. 
00:30:04.476 [2024-10-01 16:54:55.978814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.978822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.979041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.979049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.979229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.979237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.979460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.979469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.979758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.979767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.980073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.980081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.980372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.980380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.980643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.980652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.980940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.980949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.981141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.981150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 
00:30:04.477 [2024-10-01 16:54:55.981458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.981467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.981814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.981822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.981952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.981960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.982251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.982260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.982551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.982560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.982896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.982904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.983264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.983273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.983583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.983593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.983881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.983889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.984174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.984182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 
00:30:04.477 [2024-10-01 16:54:55.984475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.984485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.984760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.984769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.985064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.985073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.985355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.985363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.985650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.985658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.985927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.985935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.986231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.986240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.986558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.986566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.986856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.986864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.987138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.987146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 
00:30:04.477 [2024-10-01 16:54:55.987467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.987476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.987753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.987761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.988056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.988064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.988234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.988243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.988523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.988532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.988800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.988808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.988964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.988979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.989252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.989260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.477 [2024-10-01 16:54:55.989546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.477 [2024-10-01 16:54:55.989555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.477 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.989716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.989724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 
00:30:04.478 [2024-10-01 16:54:55.990024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.990033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.990312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.990320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.990611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.990620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.990888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.990896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.991200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.991209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.991475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.991484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.991677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.991685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.991975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.991983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.992249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.992257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.992531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.992540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 
00:30:04.478 [2024-10-01 16:54:55.992802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.992811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.993119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.993128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.993440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.993449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.993569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.993577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.993848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.993856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.994129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.994137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.994409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.994417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.994732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.994741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.994889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.994899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 00:30:04.478 [2024-10-01 16:54:55.995188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.478 [2024-10-01 16:54:55.995196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.478 qpair failed and we were unable to recover it. 
00:30:04.483 [2024-10-01 16:54:56.050670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-10-01 16:54:56.050680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-10-01 16:54:56.050942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-10-01 16:54:56.050951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-10-01 16:54:56.051232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-10-01 16:54:56.051241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-10-01 16:54:56.051529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-10-01 16:54:56.051538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-10-01 16:54:56.051736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-10-01 16:54:56.051745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-10-01 16:54:56.052036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-10-01 16:54:56.052045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-10-01 16:54:56.052338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-10-01 16:54:56.052346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-10-01 16:54:56.052605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-10-01 16:54:56.052613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-10-01 16:54:56.052894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-10-01 16:54:56.052902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-10-01 16:54:56.053101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-10-01 16:54:56.053109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 
00:30:04.483 [2024-10-01 16:54:56.053295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-10-01 16:54:56.053305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-10-01 16:54:56.053509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-10-01 16:54:56.053526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.483 [2024-10-01 16:54:56.053846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.483 [2024-10-01 16:54:56.053856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.483 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.054132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.054141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.054435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.054444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.054713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.054723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.054979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.054988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.055257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.055266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.055527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.055535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.055822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.055830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 
00:30:04.484 [2024-10-01 16:54:56.056119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.056128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.056395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.056403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.056706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.056714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.056992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.057000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.057319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.057328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.057636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.057645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.057938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.057947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.058254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.058264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.058542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.058550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.058817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.058825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 
00:30:04.484 [2024-10-01 16:54:56.059140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.059148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.059423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.059431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.059625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.059634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.059913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.059922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.060249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.060257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.060562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.060572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.060864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.060874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.061163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.061173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.061461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.061470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.061765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.061774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 
00:30:04.484 [2024-10-01 16:54:56.062032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.062041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.062332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.062340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.062631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.062640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.062813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.062821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.063132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.063141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.063448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.484 [2024-10-01 16:54:56.063458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.484 qpair failed and we were unable to recover it. 00:30:04.484 [2024-10-01 16:54:56.063745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.063753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.063978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.063987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.064130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.064139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.064413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.064421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 
00:30:04.485 [2024-10-01 16:54:56.064721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.064731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.065039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.065048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.065323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.065331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.065596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.065605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.065888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.065897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.066210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.066219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.066441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.066450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.066728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.066737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.067017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.067026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.067302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.067310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 
00:30:04.485 [2024-10-01 16:54:56.067587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.067595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.067898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.067906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.068203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.068212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.068516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.068524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.068817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.068825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.069087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.069095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.069392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.069402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.069571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.069580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.069890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.069898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 00:30:04.485 [2024-10-01 16:54:56.070170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.485 [2024-10-01 16:54:56.070178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.485 qpair failed and we were unable to recover it. 
00:30:04.485 [2024-10-01 16:54:56.070476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.485 [2024-10-01 16:54:56.070484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.485 qpair failed and we were unable to recover it.
00:30:04.485 [2024-10-01 16:54:56.070755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.485 [2024-10-01 16:54:56.070764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.485 qpair failed and we were unable to recover it.
00:30:04.485 [2024-10-01 16:54:56.071071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.485 [2024-10-01 16:54:56.071080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.485 qpair failed and we were unable to recover it.
00:30:04.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2865019 Killed "${NVMF_APP[@]}" "$@"
00:30:04.485 [2024-10-01 16:54:56.071381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.485 [2024-10-01 16:54:56.071390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.485 qpair failed and we were unable to recover it.
00:30:04.485 [2024-10-01 16:54:56.071681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.485 [2024-10-01 16:54:56.071690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.485 qpair failed and we were unable to recover it.
00:30:04.485 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:04.485 [2024-10-01 16:54:56.071974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.485 [2024-10-01 16:54:56.071983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.485 qpair failed and we were unable to recover it.
00:30:04.485 [2024-10-01 16:54:56.072278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.485 [2024-10-01 16:54:56.072289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.485 qpair failed and we were unable to recover it.
00:30:04.485 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:04.485 [2024-10-01 16:54:56.072573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.485 [2024-10-01 16:54:56.072583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.485 qpair failed and we were unable to recover it.
00:30:04.485 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:30:04.485 [2024-10-01 16:54:56.072858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.485 [2024-10-01 16:54:56.072867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.485 qpair failed and we were unable to recover it.
00:30:04.485 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:04.485 [2024-10-01 16:54:56.073141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.485 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:04.485 [2024-10-01 16:54:56.073151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.485 qpair failed and we were unable to recover it.
00:30:04.485 [2024-10-01 16:54:56.073442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.485 [2024-10-01 16:54:56.073452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.485 qpair failed and we were unable to recover it.
00:30:04.485 [2024-10-01 16:54:56.073688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.073698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.073981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.073991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.074302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.074310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.074598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.074606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.074781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.074790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.075071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.075079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.075379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.075388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.075529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.075537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.075715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.075724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.075991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.076001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.076256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.076265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.076579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.076589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.076841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.076850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.077047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.077056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.077351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.077358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.077559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.077567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.077739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.077747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.078007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.078017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.078312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.078320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.078581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.078589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.078861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.078873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.079158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.079167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.079335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.079345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.079672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.079681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.079949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.079959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.080128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.080138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.080404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.080414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 [2024-10-01 16:54:56.080584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.080593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2865858
00:30:04.486 [2024-10-01 16:54:56.080878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.080889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2865858
00:30:04.486 [2024-10-01 16:54:56.081160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.081170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:04.486 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2865858 ']'
00:30:04.486 [2024-10-01 16:54:56.081357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.081366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:04.486 [2024-10-01 16:54:56.081693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.081703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:04.486 [2024-10-01 16:54:56.081890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.081900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:04.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:04.486 [2024-10-01 16:54:56.082190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.486 [2024-10-01 16:54:56.082199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.486 qpair failed and we were unable to recover it.
00:30:04.486 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:30:04.487 [2024-10-01 16:54:56.082403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.082411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.082688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.082696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.083025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.083035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.083260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.083269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.083561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.083569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.083853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.083861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.084047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.084056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.084373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.084380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.084629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.084638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.084848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.084857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.085031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.085056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.085424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.085433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.085667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.085676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.085879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.085888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.086232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.086241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.086506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.086515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.086819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.086828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.087030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.087040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.087372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.087383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.087680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.087689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.087986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.087996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.088182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.088191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.088440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.088449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.088750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.088759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.089053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.089063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.089412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.089422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.089671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.089681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.089986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.089996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.090255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.090264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.090419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.090429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.090641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.090650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.090867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.090875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.091170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.091180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.091445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.091455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.091614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.091624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.091859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.091869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.092195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.487 [2024-10-01 16:54:56.092205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.487 qpair failed and we were unable to recover it.
00:30:04.487 [2024-10-01 16:54:56.092488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.092498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.092785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.092795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.093085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.093095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.093286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.093296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.093560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.093570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.093757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.093766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.094105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.094115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.094429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.094438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.094754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.094763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.094988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.094998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.095285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.095294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.095592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.095602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.095876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.095884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.096039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.096050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.096357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.096366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.096554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.096563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.096840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.096849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.097112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.097121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.097274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.097284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.097532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.488 [2024-10-01 16:54:56.097540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.488 qpair failed and we were unable to recover it.
00:30:04.488 [2024-10-01 16:54:56.097817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-10-01 16:54:56.097827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-10-01 16:54:56.098121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-10-01 16:54:56.098132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-10-01 16:54:56.098398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-10-01 16:54:56.098408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-10-01 16:54:56.098695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-10-01 16:54:56.098703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-10-01 16:54:56.098978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-10-01 16:54:56.098988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-10-01 16:54:56.099261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-10-01 16:54:56.099271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-10-01 16:54:56.099565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-10-01 16:54:56.099574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-10-01 16:54:56.099758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-10-01 16:54:56.099768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-10-01 16:54:56.099921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-10-01 16:54:56.099930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-10-01 16:54:56.100212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-10-01 16:54:56.100222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 
00:30:04.488 [2024-10-01 16:54:56.100493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-10-01 16:54:56.100502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-10-01 16:54:56.100778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-10-01 16:54:56.100787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-10-01 16:54:56.100837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-10-01 16:54:56.100845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-10-01 16:54:56.101120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-10-01 16:54:56.101129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-10-01 16:54:56.101282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-10-01 16:54:56.101292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-10-01 16:54:56.101451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-10-01 16:54:56.101460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-10-01 16:54:56.101615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-10-01 16:54:56.101624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-10-01 16:54:56.101918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.488 [2024-10-01 16:54:56.101928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.488 qpair failed and we were unable to recover it. 00:30:04.488 [2024-10-01 16:54:56.102229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.102239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.102474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.102484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 
00:30:04.489 [2024-10-01 16:54:56.102781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.102790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.102962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.102975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.103054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.103062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.103340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.103349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.103669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.103679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.103932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.103941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.104138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.104148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.104502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.104512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.104790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.104800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.105039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.105048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 
00:30:04.489 [2024-10-01 16:54:56.105371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.105379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.105539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.105550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.105862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.105871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.106203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.106219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.106495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.106503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.106799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.106808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.107096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.107104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.107287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.107296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.107573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.107582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.107664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.107671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 
00:30:04.489 [2024-10-01 16:54:56.107941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.107949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.108221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.108229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.108535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.108543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.108693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.108703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.108990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.109000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.109236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.109245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.109425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.109433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.109654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.109663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.109977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.109986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.110347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.110356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 
00:30:04.489 [2024-10-01 16:54:56.110695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.110705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.489 qpair failed and we were unable to recover it. 00:30:04.489 [2024-10-01 16:54:56.110853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.489 [2024-10-01 16:54:56.110862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.111126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.111134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.111319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.111327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.111587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.111597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.111761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.111770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.111913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.111923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.112115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.112123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.112397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.112405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.112572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.112581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 
00:30:04.490 [2024-10-01 16:54:56.112870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.112879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.113156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.113164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.113448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.113456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.113621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.113630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.113901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.113909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.114186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.114194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.114484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.114493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.114784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.114792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.115004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.115012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.115300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.115308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 
00:30:04.490 [2024-10-01 16:54:56.115365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.115372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.115651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.115660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.115804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.115812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.116009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.116017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.116417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.116427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.116701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.116709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.116898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.116906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.117037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.117045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.117326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.117334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.117616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.117624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 
00:30:04.490 [2024-10-01 16:54:56.117878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.117886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.118158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.118166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.118317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.118325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.118639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.118647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.118982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.118991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.119264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.119273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.119502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.119511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.119805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.119814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.120102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.120111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 00:30:04.490 [2024-10-01 16:54:56.120270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.490 [2024-10-01 16:54:56.120277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.490 qpair failed and we were unable to recover it. 
00:30:04.491 [2024-10-01 16:54:56.120547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.120555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.120613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.120620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.120920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.120929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.121219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.121227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.121398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.121407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.121707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.121715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.121861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.121869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.122191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.122200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.122508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.122517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.122796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.122805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 
00:30:04.491 [2024-10-01 16:54:56.123004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.123012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.123364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.123372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.123668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.123678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.123974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.123983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.124266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.124274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.124466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.124474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.124740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.124747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.124806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.124814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.124980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.124990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.125213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.125221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 
00:30:04.491 [2024-10-01 16:54:56.125513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.125521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.125884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.125893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.126172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.126181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.126480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.126489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.126780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.126790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.126956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.126966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.127263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.127272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.127570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.127579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.127850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.127858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.128042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.128050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 
00:30:04.491 [2024-10-01 16:54:56.128336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.128344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.128511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.128519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.491 [2024-10-01 16:54:56.128791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.491 [2024-10-01 16:54:56.128800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.491 qpair failed and we were unable to recover it. 00:30:04.768 [2024-10-01 16:54:56.129077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.768 [2024-10-01 16:54:56.129088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.768 qpair failed and we were unable to recover it. 00:30:04.768 [2024-10-01 16:54:56.129373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.768 [2024-10-01 16:54:56.129381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.768 qpair failed and we were unable to recover it. 00:30:04.768 [2024-10-01 16:54:56.129541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.768 [2024-10-01 16:54:56.129550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.768 qpair failed and we were unable to recover it. 00:30:04.768 [2024-10-01 16:54:56.129839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.768 [2024-10-01 16:54:56.129848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.768 qpair failed and we were unable to recover it. 00:30:04.768 [2024-10-01 16:54:56.130012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.768 [2024-10-01 16:54:56.130020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.768 qpair failed and we were unable to recover it. 00:30:04.768 [2024-10-01 16:54:56.130243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.768 [2024-10-01 16:54:56.130252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.768 qpair failed and we were unable to recover it. 00:30:04.768 [2024-10-01 16:54:56.130418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.768 [2024-10-01 16:54:56.130426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.768 qpair failed and we were unable to recover it. 
00:30:04.768 [2024-10-01 16:54:56.130567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.768 [2024-10-01 16:54:56.130575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.768 qpair failed and we were unable to recover it. 00:30:04.768 [2024-10-01 16:54:56.130757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.768 [2024-10-01 16:54:56.130765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.768 qpair failed and we were unable to recover it. 00:30:04.768 [2024-10-01 16:54:56.131031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.768 [2024-10-01 16:54:56.131040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.768 qpair failed and we were unable to recover it. 00:30:04.768 [2024-10-01 16:54:56.131249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.768 [2024-10-01 16:54:56.131257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.768 qpair failed and we were unable to recover it. 00:30:04.768 [2024-10-01 16:54:56.131525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.768 [2024-10-01 16:54:56.131534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.768 qpair failed and we were unable to recover it. 00:30:04.768 [2024-10-01 16:54:56.131862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.768 [2024-10-01 16:54:56.131871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.768 qpair failed and we were unable to recover it. 00:30:04.768 [2024-10-01 16:54:56.132066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.768 [2024-10-01 16:54:56.132075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.768 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.132261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.132270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.132568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.132577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.132755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.132764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 
00:30:04.769 [2024-10-01 16:54:56.133035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.133044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.133333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.133343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.133639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.133649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.133954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.133963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.134273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.134282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.134447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.134456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.134771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.134780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.135069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.135077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.135211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.135218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.135394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.135402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 
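For context (not part of the log output): errno = 111 on Linux is ECONNREFUSED, meaning the target at 10.0.0.2:4420 (4420 is the conventional NVMe/TCP port) is reachable but nothing is accepting connections, so SPDK's posix_sock_create() fails and the NVMe/TCP layer gives up on the qpair. A minimal standalone sketch of the same failure mode, independent of SPDK; the address and port are copied from the log and are otherwise illustrative:

```c
/* Minimal sketch: observe the errno = 111 (ECONNREFUSED) seen above by
 * connecting to a TCP port with no listener. Not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on a reachable target, errno is 111
         * (ECONNREFUSED); an unreachable target would instead time out. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```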
00:30:04.769 [2024-10-01 16:54:56.135584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.135593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.135779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.135789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.136069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.136077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.136243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.136252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.136299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.136307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.136371] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:30:04.769 [2024-10-01 16:54:56.136415] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.769 [2024-10-01 16:54:56.136487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.136496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.136683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.136690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.136835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.136843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 00:30:04.769 [2024-10-01 16:54:56.137063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.769 [2024-10-01 16:54:56.137072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.769 qpair failed and we were unable to recover it. 
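Editor's note on the EAL line above: these are the options SPDK hands to DPDK at startup; -c 0xF0 is a hex bitmask of logical cores, --file-prefix=spdk0 namespaces this process's hugepage files, and --proc-type=auto lets EAL choose primary/secondary mode. A minimal sketch (illustration only, not DPDK or SPDK code; the mask value is copied from the log) that decodes which logical cores 0xF0 selects:

/* Decode a DPDK EAL -c core mask: each set bit enables one lcore. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xF0; /* value from the EAL parameters above */
    for (int core = 0; core < 64; core++) {
        if (mask & (1UL << core))
            printf("lcore %d enabled\n", core); /* 0xF0 -> lcores 4, 5, 6, 7 */
    }
    return 0;
}

So this nvmf target process was pinned to logical cores 4-7.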
00:30:04.769-00:30:04.775 [... from 2024-10-01 16:54:56.136487 through 16:54:56.182920 the same sequence repeats continuously: posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. ...]
00:30:04.775 [2024-10-01 16:54:56.183097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.775 [2024-10-01 16:54:56.183107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.775 qpair failed and we were unable to recover it. 00:30:04.775 [2024-10-01 16:54:56.183299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.775 [2024-10-01 16:54:56.183308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.775 qpair failed and we were unable to recover it. 00:30:04.775 [2024-10-01 16:54:56.183615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.775 [2024-10-01 16:54:56.183624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.775 qpair failed and we were unable to recover it. 00:30:04.775 [2024-10-01 16:54:56.183829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.775 [2024-10-01 16:54:56.183837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.775 qpair failed and we were unable to recover it. 00:30:04.775 [2024-10-01 16:54:56.184144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.775 [2024-10-01 16:54:56.184154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.775 qpair failed and we were unable to recover it. 00:30:04.775 [2024-10-01 16:54:56.184440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.775 [2024-10-01 16:54:56.184450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.775 qpair failed and we were unable to recover it. 00:30:04.775 [2024-10-01 16:54:56.184630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.775 [2024-10-01 16:54:56.184639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.775 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.184822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.184831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.185134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.185143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.185391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.185400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 
00:30:04.776 [2024-10-01 16:54:56.185694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.185703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.185981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.185992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.186257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.186265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.186587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.186596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.186887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.186895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.187068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.187076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.187357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.187365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.187635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.187643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.188008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.188017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.188339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.188349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 
00:30:04.776 [2024-10-01 16:54:56.188657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.188666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.188952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.188961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.189254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.189262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.189546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.189554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.189858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.189866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.190073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.190081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.190371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.190379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.190565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.190573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.190833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.190841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.191181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.191189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 
00:30:04.776 [2024-10-01 16:54:56.191488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.191496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.191661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.191670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.191976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.191986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.192270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.192279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.776 [2024-10-01 16:54:56.192568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.776 [2024-10-01 16:54:56.192578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.776 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.192921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.192930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.193114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.193122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.193404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.193413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.193753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.193762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 
00:30:04.777 [2024-10-01 16:54:56.193919] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:04.777 [2024-10-01 16:54:56.193942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.193950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.194241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.194249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.194525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.194534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.194815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.194824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.195098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.195108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.195370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.195378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.195639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.195648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.195945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.195954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.196268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.196278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 
00:30:04.777 [2024-10-01 16:54:56.196470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.196479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.196783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.196793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.197094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.197104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.197433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.197442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.197783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.197793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.198097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.198106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.198266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.198275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.198322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.198329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.198625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.198634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.198911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.198920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 
00:30:04.777 [2024-10-01 16:54:56.199318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.199327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.199639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.199648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.199927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.199936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.200239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.200249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.200538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.777 [2024-10-01 16:54:56.200547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.777 qpair failed and we were unable to recover it. 00:30:04.777 [2024-10-01 16:54:56.200867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.200877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.201150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.201162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.201464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.201474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.201756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.201765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.201824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.201831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 
00:30:04.778 [2024-10-01 16:54:56.202103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.202112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.202405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.202413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.202758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.202767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.202953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.202961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.203243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.203252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.203549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.203558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.203749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.203758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.204045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.204054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.204360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.204368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.204699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.204708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 
00:30:04.778 [2024-10-01 16:54:56.205012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.205021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.205313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.205322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.205626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.205636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.205809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.205817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.206103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.206112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.206433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.206442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.206741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.206750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.206918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.206927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.207213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.207223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.207513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.207522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 
00:30:04.778 [2024-10-01 16:54:56.207867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.207876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.208042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.208050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.208331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.208340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.208641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.208650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.208948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.778 [2024-10-01 16:54:56.208958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.778 qpair failed and we were unable to recover it. 00:30:04.778 [2024-10-01 16:54:56.209278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.209288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.209573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.209582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.209905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.209914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.210230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.210239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.210536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.210544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 
00:30:04.779 [2024-10-01 16:54:56.210812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.210820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.211101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.211110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.211318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.211326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.211603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.211612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.211920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.211929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.212220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.212229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.212561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.212572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.212870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.212879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.213062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.213072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.213243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.213252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 
00:30:04.779 [2024-10-01 16:54:56.213430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.213440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.213723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.213733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.214039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.214048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.214328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.214335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.214629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.214638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.214921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.214931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.215227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.215236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.215465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.215473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.215779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.215788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.216067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.216076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 
00:30:04.779 [2024-10-01 16:54:56.216223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.216232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.216515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.216524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.779 qpair failed and we were unable to recover it. 00:30:04.779 [2024-10-01 16:54:56.216838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.779 [2024-10-01 16:54:56.216848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.217147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.217155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.217450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.217459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.217732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.217741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.218033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.218041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.218405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.218414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.218702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.218711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.218863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.218873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 
00:30:04.780 [2024-10-01 16:54:56.219171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.219180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.219470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.219480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.219647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.219655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.219957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.219966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.220280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.220290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.220568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.220576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.220731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.220741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.220898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.220907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.221165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.221173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.221334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.221342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 
00:30:04.780 [2024-10-01 16:54:56.221635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.221644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.221806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.221815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.222013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.222022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.222292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.222300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.222464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.222473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.222645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.222654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.222838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.222848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.223115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.223124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.223439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.223448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 00:30:04.780 [2024-10-01 16:54:56.223749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.780 [2024-10-01 16:54:56.223757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.780 qpair failed and we were unable to recover it. 
00:30:04.781 [2024-10-01 16:54:56.224077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.224087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.224286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.224294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.224596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.224605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.224773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.224780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.225082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.225091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.225418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.225426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.225694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.225702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.225994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.226003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.226262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.226271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.226416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.226424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.226734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.226742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.227044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.227052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.227353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.227362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.227634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.227642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.227807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.227817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.228100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.228108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.228431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.228439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.228792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.228802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.229087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.229096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.229463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.229471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.229642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.229650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.229748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.229756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.230017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.230027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.230336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.230344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.230655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.230665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.230983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.230992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.231148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.231156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.231464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.781 [2024-10-01 16:54:56.231473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.781 qpair failed and we were unable to recover it.
00:30:04.781 [2024-10-01 16:54:56.231885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.231894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.232180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.232188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.232487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.232496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.232704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.232712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.232809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.232818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.233069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.233078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.233380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.233388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.233552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.233561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.233657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.233667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.233966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.233978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.234264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.234273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.234638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.234646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.235000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.235009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.235287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.235296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.235504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.235513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.235784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.235793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.236064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.236074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.236332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.236342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.236649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.236659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.236931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.236940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.237234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.237243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.237520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.237529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.237802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.237812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.238095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.238105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.238404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.238414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.238597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.238606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.238891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.238900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.239202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.239212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.239486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.239495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.782 qpair failed and we were unable to recover it.
00:30:04.782 [2024-10-01 16:54:56.239766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.782 [2024-10-01 16:54:56.239775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.240049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.240059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.240274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.240283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.240488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.240497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.240693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.240703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.240966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.240978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.241172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.241181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.241460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.241469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.241767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.241776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.241900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.241910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.242206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.242215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.242481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.242491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.242695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.242705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.242991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.243001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.243295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.243304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.243594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.243604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.243896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.243906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.244211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.244220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.244527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.244537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.244721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.244732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.244977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.244987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.245272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.245282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.245537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.245546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.245868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.245877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.246172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.246182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.246471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.246481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.246783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.246792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.247078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.247087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.247388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.247398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.247685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.247695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.783 qpair failed and we were unable to recover it.
00:30:04.783 [2024-10-01 16:54:56.247994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.783 [2024-10-01 16:54:56.248004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.248316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.248325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.248542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.248552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.248874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.248884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.249165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.249157] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:04.784 [2024-10-01 16:54:56.249174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.249181] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:04.784 [2024-10-01 16:54:56.249187] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:04.784 [2024-10-01 16:54:56.249192] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:04.784 [2024-10-01 16:54:56.249196] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:04.784 [2024-10-01 16:54:56.249482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.249490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 [2024-10-01 16:54:56.249378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.249534] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6
00:30:04.784 [2024-10-01 16:54:56.249655] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:30:04.784 [2024-10-01 16:54:56.249658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.249665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.249760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.249768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.249657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7
00:30:04.784 [2024-10-01 16:54:56.249910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.249919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.250160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.250169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.250464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.250473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.250793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.250801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.251004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.251013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.251283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.251293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.251347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.251355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.251594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.251603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.251878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.251888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.252097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.252106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.252155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.252163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.252482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.252491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.252809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.252819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.253007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.253016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.253313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.253321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.253480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.253488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.253783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.253792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.254027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.254036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.254324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.784 [2024-10-01 16:54:56.254334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.784 qpair failed and we were unable to recover it.
00:30:04.784 [2024-10-01 16:54:56.254482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.254491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.254792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.254800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.255007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.255015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.255339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.255348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.255654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.255662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.255974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.255983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.256267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.256275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.256553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.256561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.256747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.256755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.257042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.257051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.257359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.257368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.257557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.257565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.257788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.257796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.257965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.257977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.258271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.258279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.258553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.258562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.258826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.258834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.259097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.259106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.259426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.259436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.259770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.259780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.260085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.260094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.260288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.260296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.260568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.260578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.260851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.260859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.261164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.261173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.261346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.261355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.261536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.261544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.785 qpair failed and we were unable to recover it.
00:30:04.785 [2024-10-01 16:54:56.261743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.785 [2024-10-01 16:54:56.261753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.262072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.262081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.262359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.262369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.262530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.262540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.262689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.262699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.262861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.262870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.263174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.263184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.263364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.263374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.263727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.263736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.263919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.263927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.264061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.264069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.264248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.264258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.264583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.264594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.264760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.264769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.265099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.265108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.265281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.265289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.265470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.265478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.265797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.265805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.265973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.265981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.266142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.266150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.266427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.266436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.266737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.266746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.267071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.267080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.267373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.267381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.267660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.267669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.267834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.786 [2024-10-01 16:54:56.267844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.786 qpair failed and we were unable to recover it.
00:30:04.786 [2024-10-01 16:54:56.268162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.786 [2024-10-01 16:54:56.268171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.786 qpair failed and we were unable to recover it. 00:30:04.786 [2024-10-01 16:54:56.268471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.786 [2024-10-01 16:54:56.268479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.786 qpair failed and we were unable to recover it. 00:30:04.786 [2024-10-01 16:54:56.268798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.786 [2024-10-01 16:54:56.268807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.786 qpair failed and we were unable to recover it. 00:30:04.786 [2024-10-01 16:54:56.268975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.786 [2024-10-01 16:54:56.268984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.269170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.269178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.269493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.269502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.269661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.269669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.269935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.269945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.270110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.270118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.270364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.270372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 
00:30:04.787 [2024-10-01 16:54:56.270667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.270675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.270984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.270993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.271298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.271307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.271475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.271482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.271655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.271663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.271949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.271957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.272272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.272281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.272575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.272584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.272773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.272782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.273087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.273097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 
00:30:04.787 [2024-10-01 16:54:56.273441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.273449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.273632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.273640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.273797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.273806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.273982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.273991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.274375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.274383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.274553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.274562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.274857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.274868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.275059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.275068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.275249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.275257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.275548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.275556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 
00:30:04.787 [2024-10-01 16:54:56.275691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.275698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.276036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.276044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.276098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.276106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.276493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.276501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.276800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.787 [2024-10-01 16:54:56.276809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.787 qpair failed and we were unable to recover it. 00:30:04.787 [2024-10-01 16:54:56.276988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.276996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.277297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.277305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.277604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.277612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.277903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.277911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.278247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.278256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 
00:30:04.788 [2024-10-01 16:54:56.278558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.278566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.278847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.278855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.279127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.279135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.279413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.279421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.279599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.279607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.279907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.279915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.280090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.280098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.280261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.280270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.280563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.280571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.280757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.280764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 
00:30:04.788 [2024-10-01 16:54:56.281049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.281058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.281339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.281347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.281510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.281517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.281802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.281812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.282128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.282137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.282470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.282480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.282791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.282800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.283091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.283100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.283396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.283405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.283687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.283695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 
00:30:04.788 [2024-10-01 16:54:56.283992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.284001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.284183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.284192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.284294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.284302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.284602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.284610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.284772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.788 [2024-10-01 16:54:56.284780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.788 qpair failed and we were unable to recover it. 00:30:04.788 [2024-10-01 16:54:56.285050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.285059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.285248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.285259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.285411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.285420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.285722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.285730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.285884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.285892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 
00:30:04.789 [2024-10-01 16:54:56.286097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.286105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.286302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.286310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.286605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.286614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.286887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.286896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.287173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.287181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.287223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.287231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.287495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.287503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.287812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.287820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.288211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.288220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.288523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.288532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 
00:30:04.789 [2024-10-01 16:54:56.288702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.288711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.288855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.288863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.289011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.289019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.289324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.289332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.289644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.289653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.289842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.289850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.290001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.290008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.290170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.290178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.290344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.290353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.290656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.290664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 
00:30:04.789 [2024-10-01 16:54:56.290936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.290944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.291247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.291255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.291560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.291569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.291849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.291858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.292114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.292122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.292283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.292293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.292620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.292628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.292853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.789 [2024-10-01 16:54:56.292861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.789 qpair failed and we were unable to recover it. 00:30:04.789 [2024-10-01 16:54:56.293053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.293062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.293441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.293449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 
00:30:04.790 [2024-10-01 16:54:56.293628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.293637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.293915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.293924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.294220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.294229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.294533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.294542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.294833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.294842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.295124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.295133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.295429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.295439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.295749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.295757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.295951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.295960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.296236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.296245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 
00:30:04.790 [2024-10-01 16:54:56.296402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.296412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.296725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.296735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.296886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.296896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.297084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.297093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.297364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.297374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.297562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.297571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.297865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.297874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.298096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.298105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.298421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.298431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.298750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.298759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 
00:30:04.790 [2024-10-01 16:54:56.299048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.299057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.299356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.299366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.299641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.299650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.299945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.299954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.300123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.300132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.300483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.300492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.300787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.300796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.301090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.301099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.301264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.301272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.301583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.301592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 
00:30:04.790 [2024-10-01 16:54:56.301855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.790 [2024-10-01 16:54:56.301865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.790 qpair failed and we were unable to recover it. 00:30:04.790 [2024-10-01 16:54:56.302036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.302045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-10-01 16:54:56.302305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.302312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-10-01 16:54:56.302611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.302619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-10-01 16:54:56.302908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.302918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-10-01 16:54:56.303071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.303080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-10-01 16:54:56.303350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.303358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-10-01 16:54:56.303629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.303637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-10-01 16:54:56.303806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.303814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-10-01 16:54:56.303979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.303988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 
00:30:04.791 [2024-10-01 16:54:56.304280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.304289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-10-01 16:54:56.304614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.304623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-10-01 16:54:56.304960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.304971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-10-01 16:54:56.305344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.305352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-10-01 16:54:56.305638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.305647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-10-01 16:54:56.305979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.305988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-10-01 16:54:56.306252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.306262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-10-01 16:54:56.306535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.306544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-10-01 16:54:56.306840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.306850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 00:30:04.791 [2024-10-01 16:54:56.307024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.791 [2024-10-01 16:54:56.307033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.791 qpair failed and we were unable to recover it. 
00:30:04.791 [2024-10-01 16:54:56.307201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.791 [2024-10-01 16:54:56.307209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.791 qpair failed and we were unable to recover it.
00:30:04.791 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry from 16:54:56.307 through 16:54:56.342 ...]
00:30:04.796 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:30:04.796 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:30:04.796 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:30:04.796 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:04.796 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:04.796 [... connect() failed, errno = 111 retries continue interleaved with the trace output above, 16:54:56.343 through 16:54:56.344 ...]
00:30:04.796 [... connect() failed / sock connection error / qpair failed retries continue from 16:54:56.345 through 16:54:56.359 ...]
00:30:04.798 [2024-10-01 16:54:56.359761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-10-01 16:54:56.359768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-10-01 16:54:56.359946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-10-01 16:54:56.359953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-10-01 16:54:56.360295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-10-01 16:54:56.360302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-10-01 16:54:56.360587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-10-01 16:54:56.360595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-10-01 16:54:56.360921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-10-01 16:54:56.360928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-10-01 16:54:56.361289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-10-01 16:54:56.361296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-10-01 16:54:56.361616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.798 [2024-10-01 16:54:56.361623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.798 qpair failed and we were unable to recover it. 00:30:04.798 [2024-10-01 16:54:56.361804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.361811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.362123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.362131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.362322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.362329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 
00:30:04.799 [2024-10-01 16:54:56.362695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.362703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.362865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.362872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.363037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.363045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.363225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.363232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.363548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.363557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.363848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.363857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.364166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.364174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.364384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.364390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.364687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.364697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.364882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.364889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 
00:30:04.799 [2024-10-01 16:54:56.365183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.365191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.365362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.365369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.365650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.365657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.365964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.365975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.366326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.366333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.366527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.366534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.366730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.366739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.366995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.367002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.367383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.367390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.367571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.367578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 
00:30:04.799 [2024-10-01 16:54:56.367782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.367789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.367823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.367830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.368126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.368133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.368423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.368431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.368589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.368596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.368644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.368651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.368803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.799 [2024-10-01 16:54:56.368811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.799 qpair failed and we were unable to recover it. 00:30:04.799 [2024-10-01 16:54:56.369118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.369126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.369205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.369211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.369515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.369522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 
00:30:04.800 [2024-10-01 16:54:56.369883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.369890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.370131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.370139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.370448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.370455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.370628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.370635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.370928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.370935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.371208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.371215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.371534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.371542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.371877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.371885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.371941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.371948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.372267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.372274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 
00:30:04.800 [2024-10-01 16:54:56.372573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.372580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.372898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.372905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.373219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.373228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.373609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.373616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.373880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.373887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.374037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.374045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.374294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.374301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.374570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.374578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.374907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.374914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.375174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.375182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 
00:30:04.800 [2024-10-01 16:54:56.375542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.375550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.375846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.375854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.376193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.376202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.376379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.376386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.376453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.376460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.376659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.376667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.376915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.376924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.800 qpair failed and we were unable to recover it. 00:30:04.800 [2024-10-01 16:54:56.377153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.800 [2024-10-01 16:54:56.377160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.377324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.377331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.377641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.377649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 
00:30:04.801 [2024-10-01 16:54:56.377931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.377939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.378174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.378182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.378518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.378525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.378724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.378732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.378992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.379000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.379143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.379151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.379420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.379427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.379762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.379769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.379967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.379978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.380157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.380164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 
00:30:04.801 [2024-10-01 16:54:56.380217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.380224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.380410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.380418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.380709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.380716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.380860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.380866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.380909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.380915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.381169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.381176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.381352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.381359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.381715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.381722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.382010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.382018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 00:30:04.801 [2024-10-01 16:54:56.382408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.801 [2024-10-01 16:54:56.382414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.801 qpair failed and we were unable to recover it. 
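The errno = 111 in the retry storm above is ECONNREFUSED: while the target is disconnected, nothing accepts TCP connections on 10.0.0.2:4420, so every connect() issued by nvme_tcp_qpair_connect_sock is refused and the qpair cannot recover. A minimal sketch, not part of the harness, that shows the same failure mode from bash (host and port reused from the log):

#!/usr/bin/env bash
# Map errno 111 to its symbolic name (ECONNREFUSED on Linux).
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# -> ECONNREFUSED - Connection refused

# Dial the address/port the initiator keeps retrying. With no NVMe-oF
# listener bound to 10.0.0.2:4420, connect() fails exactly as in the log.
if ! timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "connect() to 10.0.0.2:4420 failed (refused or timed out)"
fi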
00:30:04.801 [2024-10-01 16:54:56.382776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.801 [2024-10-01 16:54:56.382783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.801 qpair failed and we were unable to recover it.
00:30:04.801 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:04.801 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:04.801 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:04.802 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:04.802 [... connect() retries keep failing with errno = 111 between these trace lines, through 16:54:56.384659; repetitions condensed ...]
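The trap registered at nvmf/common.sh@510 is the harness's standard cleanup hook: dump the app's shared-memory segment (best effort, `|| :` swallows its exit status) and then tear the target down on any exit path. A self-contained sketch of the pattern, with stub bodies standing in for the real helpers:

#!/usr/bin/env bash
# Stubs for illustration only; the real process_shm/nvmftestfini live in
# the SPDK test harness.
process_shm()  { echo "would dump shm segment id=$2"; }
nvmftestfini() { echo "would stop the nvmf target and clean up"; }
NVMF_APP_SHM_ID=0

# Run the shm dump (ignoring its exit status), then the teardown, whether
# the test exits normally or is interrupted.
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT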
00:30:04.802 [2024-10-01 16:54:56.384861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.802 [2024-10-01 16:54:56.384868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.802 qpair failed and we were unable to recover it.
00:30:04.803 [... same three-line failure repeated for every retry from 16:54:56.385233 through 16:54:56.397646; only the timestamps advance ...]
00:30:04.803 [2024-10-01 16:54:56.397774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.803 [2024-10-01 16:54:56.397782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.803 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.397994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.398003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.398314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.398322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.398658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.398667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.398832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.398840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.398904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.398911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.398982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.398989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.399162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.399170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 Malloc0 00:30:04.804 [2024-10-01 16:54:56.399528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.399537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.399721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.399730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 
00:30:04.804 [2024-10-01 16:54:56.400063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.400072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.804 [2024-10-01 16:54:56.400376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.400384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:04.804 [2024-10-01 16:54:56.400571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.400579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.400746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.400754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.804 [2024-10-01 16:54:56.401051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.401060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:04.804 [2024-10-01 16:54:56.401141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.401148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.401377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.401385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.401670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.401678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 
00:30:04.804 [2024-10-01 16:54:56.401851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.401859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.402039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.402047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.402343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.402352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.402668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.402678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.402996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.403004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.403328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.403336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.403654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.403662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.403830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.403837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.404035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.404043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.404387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.404396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 
00:30:04.804 [2024-10-01 16:54:56.404570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.404577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.804 [2024-10-01 16:54:56.404878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.804 [2024-10-01 16:54:56.404886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.804 qpair failed and we were unable to recover it. 00:30:04.805 [2024-10-01 16:54:56.405175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.805 [2024-10-01 16:54:56.405183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.805 qpair failed and we were unable to recover it. 00:30:04.805 [2024-10-01 16:54:56.405345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.805 [2024-10-01 16:54:56.405354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.805 qpair failed and we were unable to recover it. 00:30:04.805 [2024-10-01 16:54:56.405634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.805 [2024-10-01 16:54:56.405642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.805 qpair failed and we were unable to recover it. 00:30:04.805 [2024-10-01 16:54:56.405813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.805 [2024-10-01 16:54:56.405821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.805 qpair failed and we were unable to recover it. 00:30:04.805 [2024-10-01 16:54:56.406175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.805 [2024-10-01 16:54:56.406184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.805 qpair failed and we were unable to recover it. 00:30:04.805 [2024-10-01 16:54:56.406463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.805 [2024-10-01 16:54:56.406470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.805 qpair failed and we were unable to recover it. 00:30:04.805 [2024-10-01 16:54:56.406717] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.805 [2024-10-01 16:54:56.406774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.805 [2024-10-01 16:54:56.406782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:04.805 qpair failed and we were unable to recover it. 
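Note on the repeated failures above: errno = 111 is ECONNREFUSED on Linux, so each connect() from the host-side NVMe/TCP initiator toward 10.0.0.2:4420 (the default NVMe/TCP port) is being actively refused while nothing is listening there yet; the "*** TCP Transport Init ***" notice marks the target's TCP transport starting up. A minimal shell sketch that reproduces the same condition (assumes a Linux host with nc installed; the address and port come from the log, the probe itself is not part of the test):

  # Zero-I/O probe of the NVMe/TCP port; while no listener is bound to
  # 10.0.0.2:4420 the TCP handshake is refused, which is errno 111.
  nc -z -w 1 10.0.0.2 4420 || echo "connection refused (errno 111, ECONNREFUSED)"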
00:30:04.805 [2024-10-01 16:54:56.406959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.406967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.407165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.407174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.407353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.407362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.407659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.407667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.407965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.407978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.408153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.408161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.408469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.408477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.408660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.408668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.408926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.408934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.409127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.409136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.409427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.409436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.409720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.409730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.410028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.410037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.410329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.410337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.410644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.410652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.410959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.410968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.411245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.411253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.411528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.411536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.411877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.411886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.412060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.412068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.412352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.412361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.412641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.412651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.412955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.412965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.805 [2024-10-01 16:54:56.413217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.805 [2024-10-01 16:54:56.413226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.805 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.413517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.413526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.413689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.413697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.413874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.413882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.414038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.414047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.414212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.414221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.414451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.414459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.414530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.414539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.414579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.414587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.415076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.415171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205bdf0 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.415469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.415505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205bdf0 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.415825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.415856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205bdf0 with addr=10.0.0.2, port=4420
00:30:04.806 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.416139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.416148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:04.806 [2024-10-01 16:54:56.416437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.416448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:04.806 [2024-10-01 16:54:56.416769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.416779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:04.806 [2024-10-01 16:54:56.416963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.416976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.417171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.417179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.417531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.417539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.417828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.417836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.418158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.418166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.418487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.418495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.418820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.418828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.419121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.419130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.419409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.419417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.419717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.419724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.420020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.420028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.420300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.420308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.420481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.420489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.420636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.420644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.420910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.420918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.421229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.806 [2024-10-01 16:54:56.421238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.806 qpair failed and we were unable to recover it.
00:30:04.806 [2024-10-01 16:54:56.421411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.421420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.421587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.421595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.421879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.421888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.422162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.422171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.422452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.422461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.422740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.422747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.423060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.423068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.423265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.423273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.423570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.423582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.423910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.423919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.424227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.424235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.424516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.424525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.424836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.424846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.425005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.425014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.425318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.425328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.425617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.425626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.425941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.425950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.426278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.426288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.426574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.426583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.426632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.426640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.426958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.426967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.427098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.427107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.427308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.427317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.427648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.427657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:04.807 [2024-10-01 16:54:56.427992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.428002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.428176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.428185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:04.807 [2024-10-01 16:54:56.428502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.428511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:04.807 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:04.807 [2024-10-01 16:54:56.428812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.428822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.429013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.429022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.429293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.429300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.807 qpair failed and we were unable to recover it.
00:30:04.807 [2024-10-01 16:54:56.429577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.807 [2024-10-01 16:54:56.429586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
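For context on the trace just above: the rpc_cmd nvmf_subsystem_add_ns call attaches the bdev named Malloc0 to subsystem nqn.2016-06.io.spdk:cnode1 as a namespace, and the bare "Malloc0" echoed earlier in the log is presumably that bdev's name being printed when it was created. A hedged sketch of the kind of call that would have created it (bdev_malloc_create is a standard SPDK RPC, but this command does not appear in this excerpt and the size values are illustrative only):

  # Create a RAM-backed bdev named Malloc0: 64 MiB total, 512-byte blocks.
  # Arguments are total size in MiB and block size; -b sets the bdev name.
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0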
00:30:04.808 [2024-10-01 16:54:56.429870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.808 [2024-10-01 16:54:56.429878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
00:30:04.808 [2024-10-01 16:54:56.430079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.808 [2024-10-01 16:54:56.430088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
00:30:04.808 [2024-10-01 16:54:56.430350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.808 [2024-10-01 16:54:56.430358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
00:30:04.808 [2024-10-01 16:54:56.430697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.808 [2024-10-01 16:54:56.430706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
00:30:04.808 [2024-10-01 16:54:56.431029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.808 [2024-10-01 16:54:56.431038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
00:30:04.808 [2024-10-01 16:54:56.431390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.808 [2024-10-01 16:54:56.431398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
00:30:04.808 [2024-10-01 16:54:56.431552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.808 [2024-10-01 16:54:56.431560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
00:30:04.808 [2024-10-01 16:54:56.431708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.808 [2024-10-01 16:54:56.431716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
00:30:04.808 [2024-10-01 16:54:56.431871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.808 [2024-10-01 16:54:56.431880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
00:30:04.808 [2024-10-01 16:54:56.432024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.808 [2024-10-01 16:54:56.432034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
00:30:04.808 [2024-10-01 16:54:56.432216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.808 [2024-10-01 16:54:56.432224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
00:30:04.808 [2024-10-01 16:54:56.432498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.808 [2024-10-01 16:54:56.432506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
00:30:04.808 [2024-10-01 16:54:56.432809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.808 [2024-10-01 16:54:56.432818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
00:30:04.808 [2024-10-01 16:54:56.433076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.808 [2024-10-01 16:54:56.433084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
00:30:04.808 [2024-10-01 16:54:56.433378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.808 [2024-10-01 16:54:56.433387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
00:30:04.808 [2024-10-01 16:54:56.433674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.808 [2024-10-01 16:54:56.433682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
00:30:04.808 [2024-10-01 16:54:56.433977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.808 [2024-10-01 16:54:56.433985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
00:30:04.808 [2024-10-01 16:54:56.434275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.808 [2024-10-01 16:54:56.434284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:04.808 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.434585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.434596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.434873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.434883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.435121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.435129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.435427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.435436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.435740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.435749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.436040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.436049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.436349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.436357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.436672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.436681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.436993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.437002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.437190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.437199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.437353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.437361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.437513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.437520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.437719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.437727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.438031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.438040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.438374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.438384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.438712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.438721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.438992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.439001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.439318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.439326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.439509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.439517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:05.071 [2024-10-01 16:54:56.439864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.439874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.440041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.071 [2024-10-01 16:54:56.440050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.071 qpair failed and we were unable to recover it.
00:30:05.071 [2024-10-01 16:54:56.440219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.072 [2024-10-01 16:54:56.440226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.072 qpair failed and we were unable to recover it.
00:30:05.072 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:05.072 [2024-10-01 16:54:56.440471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.072 [2024-10-01 16:54:56.440481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.072 qpair failed and we were unable to recover it.
00:30:05.072 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:05.072 [2024-10-01 16:54:56.440643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.072 [2024-10-01 16:54:56.440653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.072 qpair failed and we were unable to recover it.
00:30:05.072 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:05.072 [2024-10-01 16:54:56.440966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.072 [2024-10-01 16:54:56.440979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.072 qpair failed and we were unable to recover it.
00:30:05.072 [2024-10-01 16:54:56.441317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.072 [2024-10-01 16:54:56.441327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.072 qpair failed and we were unable to recover it.
00:30:05.072 [2024-10-01 16:54:56.441587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.072 [2024-10-01 16:54:56.441597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.072 qpair failed and we were unable to recover it.
00:30:05.072 [2024-10-01 16:54:56.441775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.072 [2024-10-01 16:54:56.441784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.072 qpair failed and we were unable to recover it.
00:30:05.072 [2024-10-01 16:54:56.442092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.072 [2024-10-01 16:54:56.442101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420
00:30:05.072 qpair failed and we were unable to recover it.
00:30:05.072 [2024-10-01 16:54:56.442373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.072 [2024-10-01 16:54:56.442381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:05.072 qpair failed and we were unable to recover it. 00:30:05.072 [2024-10-01 16:54:56.442701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.072 [2024-10-01 16:54:56.442711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:05.072 qpair failed and we were unable to recover it. 00:30:05.072 [2024-10-01 16:54:56.443025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.072 [2024-10-01 16:54:56.443034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:05.072 qpair failed and we were unable to recover it. 00:30:05.072 [2024-10-01 16:54:56.443208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.072 [2024-10-01 16:54:56.443217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:05.072 qpair failed and we were unable to recover it. 00:30:05.072 [2024-10-01 16:54:56.443467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.072 [2024-10-01 16:54:56.443475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:05.072 qpair failed and we were unable to recover it. 00:30:05.072 [2024-10-01 16:54:56.443811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.072 [2024-10-01 16:54:56.443819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:05.072 qpair failed and we were unable to recover it. 00:30:05.072 [2024-10-01 16:54:56.443980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.072 [2024-10-01 16:54:56.443988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:05.072 qpair failed and we were unable to recover it. 00:30:05.072 [2024-10-01 16:54:56.444246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.072 [2024-10-01 16:54:56.444255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:05.072 qpair failed and we were unable to recover it. 00:30:05.072 [2024-10-01 16:54:56.444551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.072 [2024-10-01 16:54:56.444559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:05.072 qpair failed and we were unable to recover it. 00:30:05.072 [2024-10-01 16:54:56.444866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.072 [2024-10-01 16:54:56.444875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:05.072 qpair failed and we were unable to recover it. 
00:30:05.072 [2024-10-01 16:54:56.445219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.072 [2024-10-01 16:54:56.445228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:05.072 qpair failed and we were unable to recover it. 00:30:05.072 [2024-10-01 16:54:56.445399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.072 [2024-10-01 16:54:56.445407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:05.072 qpair failed and we were unable to recover it. 00:30:05.072 [2024-10-01 16:54:56.445655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.072 [2024-10-01 16:54:56.445664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:05.072 qpair failed and we were unable to recover it. 00:30:05.072 [2024-10-01 16:54:56.445847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.072 [2024-10-01 16:54:56.445855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:05.072 qpair failed and we were unable to recover it. 00:30:05.072 [2024-10-01 16:54:56.446144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.072 [2024-10-01 16:54:56.446152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:05.072 qpair failed and we were unable to recover it. 00:30:05.072 [2024-10-01 16:54:56.446434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.072 [2024-10-01 16:54:56.446442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:05.072 qpair failed and we were unable to recover it. 00:30:05.072 [2024-10-01 16:54:56.446709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.072 [2024-10-01 16:54:56.446717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fde70000b90 with addr=10.0.0.2, port=4420 00:30:05.072 qpair failed and we were unable to recover it. 
00:30:05.072 [2024-10-01 16:54:56.446985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:05.072 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:05.072 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:05.072 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:05.072 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:05.072 [2024-10-01 16:54:56.457604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:05.072 [2024-10-01 16:54:56.457711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:05.072 [2024-10-01 16:54:56.457728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:05.072 [2024-10-01 16:54:56.457734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:05.072 [2024-10-01 16:54:56.457740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:05.072 [2024-10-01 16:54:56.457754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:05.072 qpair failed and we were unable to recover it.
00:30:05.072 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:05.072 16:54:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2865126
00:30:05.072 [2024-10-01 16:54:56.467539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:05.073 [2024-10-01 16:54:56.467599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:05.073 [2024-10-01 16:54:56.467613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:05.073 [2024-10-01 16:54:56.467618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:05.073 [2024-10-01 16:54:56.467623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:05.073 [2024-10-01 16:54:56.467637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:05.073 qpair failed and we were unable to recover it.
00:30:05.073 (the preceding "Unknown controller ID 0x1" / fabric CONNECT failure block, each ending in "qpair failed and we were unable to recover it.", repeats identically except for timestamps from 16:54:56.477538 through 16:54:56.978905; the repeated blocks are omitted)
00:30:05.339 [2024-10-01 16:54:56.988832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.339 [2024-10-01 16:54:56.988877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.339 [2024-10-01 16:54:56.988889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.339 [2024-10-01 16:54:56.988898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.339 [2024-10-01 16:54:56.988903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.339 [2024-10-01 16:54:56.988917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.339 qpair failed and we were unable to recover it. 00:30:05.339 [2024-10-01 16:54:56.998866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.339 [2024-10-01 16:54:56.998906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.339 [2024-10-01 16:54:56.998917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.339 [2024-10-01 16:54:56.998923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.339 [2024-10-01 16:54:56.998928] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.339 [2024-10-01 16:54:56.998939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.339 qpair failed and we were unable to recover it. 00:30:05.339 [2024-10-01 16:54:57.008872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.339 [2024-10-01 16:54:57.008915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.339 [2024-10-01 16:54:57.008926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.339 [2024-10-01 16:54:57.008931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.339 [2024-10-01 16:54:57.008936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.339 [2024-10-01 16:54:57.008947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.339 qpair failed and we were unable to recover it. 
00:30:05.601 [2024-10-01 16:54:57.018927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.601 [2024-10-01 16:54:57.018980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.601 [2024-10-01 16:54:57.018991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.601 [2024-10-01 16:54:57.018997] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.601 [2024-10-01 16:54:57.019002] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.601 [2024-10-01 16:54:57.019013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.601 qpair failed and we were unable to recover it. 00:30:05.601 [2024-10-01 16:54:57.028945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.601 [2024-10-01 16:54:57.028994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.601 [2024-10-01 16:54:57.029005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.601 [2024-10-01 16:54:57.029011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.601 [2024-10-01 16:54:57.029016] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.601 [2024-10-01 16:54:57.029027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.601 qpair failed and we were unable to recover it. 00:30:05.601 [2024-10-01 16:54:57.039029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.601 [2024-10-01 16:54:57.039082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.601 [2024-10-01 16:54:57.039093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.601 [2024-10-01 16:54:57.039099] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.601 [2024-10-01 16:54:57.039103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.601 [2024-10-01 16:54:57.039114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.601 qpair failed and we were unable to recover it. 
00:30:05.601 [2024-10-01 16:54:57.048986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.601 [2024-10-01 16:54:57.049036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.601 [2024-10-01 16:54:57.049046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.601 [2024-10-01 16:54:57.049052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.601 [2024-10-01 16:54:57.049056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.601 [2024-10-01 16:54:57.049067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.601 qpair failed and we were unable to recover it. 00:30:05.601 [2024-10-01 16:54:57.058924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.601 [2024-10-01 16:54:57.058977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.601 [2024-10-01 16:54:57.058988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.601 [2024-10-01 16:54:57.058993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.601 [2024-10-01 16:54:57.058998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.601 [2024-10-01 16:54:57.059009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.601 qpair failed and we were unable to recover it. 00:30:05.601 [2024-10-01 16:54:57.069052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.601 [2024-10-01 16:54:57.069094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.601 [2024-10-01 16:54:57.069105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.601 [2024-10-01 16:54:57.069110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.601 [2024-10-01 16:54:57.069115] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.601 [2024-10-01 16:54:57.069126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.601 qpair failed and we were unable to recover it. 
00:30:05.601 [2024-10-01 16:54:57.079021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.601 [2024-10-01 16:54:57.079066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.601 [2024-10-01 16:54:57.079080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.601 [2024-10-01 16:54:57.079085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.601 [2024-10-01 16:54:57.079090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.601 [2024-10-01 16:54:57.079101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.601 qpair failed and we were unable to recover it. 00:30:05.601 [2024-10-01 16:54:57.089094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.601 [2024-10-01 16:54:57.089154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.601 [2024-10-01 16:54:57.089165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.601 [2024-10-01 16:54:57.089170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.601 [2024-10-01 16:54:57.089175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.601 [2024-10-01 16:54:57.089186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.601 qpair failed and we were unable to recover it. 00:30:05.601 [2024-10-01 16:54:57.099154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.601 [2024-10-01 16:54:57.099213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.601 [2024-10-01 16:54:57.099223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.601 [2024-10-01 16:54:57.099228] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.601 [2024-10-01 16:54:57.099233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.601 [2024-10-01 16:54:57.099244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.601 qpair failed and we were unable to recover it. 
00:30:05.601 [2024-10-01 16:54:57.109058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.601 [2024-10-01 16:54:57.109120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.601 [2024-10-01 16:54:57.109130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.601 [2024-10-01 16:54:57.109136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.601 [2024-10-01 16:54:57.109141] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.601 [2024-10-01 16:54:57.109153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.601 qpair failed and we were unable to recover it. 00:30:05.601 [2024-10-01 16:54:57.119195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.602 [2024-10-01 16:54:57.119240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.602 [2024-10-01 16:54:57.119250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.602 [2024-10-01 16:54:57.119256] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.602 [2024-10-01 16:54:57.119261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.602 [2024-10-01 16:54:57.119275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.602 qpair failed and we were unable to recover it. 00:30:05.602 [2024-10-01 16:54:57.129134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.602 [2024-10-01 16:54:57.129176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.602 [2024-10-01 16:54:57.129186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.602 [2024-10-01 16:54:57.129191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.602 [2024-10-01 16:54:57.129196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.602 [2024-10-01 16:54:57.129207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.602 qpair failed and we were unable to recover it. 
00:30:05.602 [2024-10-01 16:54:57.139151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.602 [2024-10-01 16:54:57.139202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.602 [2024-10-01 16:54:57.139212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.602 [2024-10-01 16:54:57.139218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.602 [2024-10-01 16:54:57.139223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.602 [2024-10-01 16:54:57.139234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.602 qpair failed and we were unable to recover it. 00:30:05.602 [2024-10-01 16:54:57.149163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.602 [2024-10-01 16:54:57.149205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.602 [2024-10-01 16:54:57.149216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.602 [2024-10-01 16:54:57.149222] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.602 [2024-10-01 16:54:57.149226] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.602 [2024-10-01 16:54:57.149238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.602 qpair failed and we were unable to recover it. 00:30:05.602 [2024-10-01 16:54:57.159306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.602 [2024-10-01 16:54:57.159357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.602 [2024-10-01 16:54:57.159367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.602 [2024-10-01 16:54:57.159373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.602 [2024-10-01 16:54:57.159378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.602 [2024-10-01 16:54:57.159389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.602 qpair failed and we were unable to recover it. 
00:30:05.602 [2024-10-01 16:54:57.169308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.602 [2024-10-01 16:54:57.169356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.602 [2024-10-01 16:54:57.169369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.602 [2024-10-01 16:54:57.169375] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.602 [2024-10-01 16:54:57.169380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.602 [2024-10-01 16:54:57.169391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.602 qpair failed and we were unable to recover it. 00:30:05.602 [2024-10-01 16:54:57.179371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.602 [2024-10-01 16:54:57.179422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.602 [2024-10-01 16:54:57.179432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.602 [2024-10-01 16:54:57.179438] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.602 [2024-10-01 16:54:57.179443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.602 [2024-10-01 16:54:57.179453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.602 qpair failed and we were unable to recover it. 00:30:05.602 [2024-10-01 16:54:57.189390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.602 [2024-10-01 16:54:57.189437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.602 [2024-10-01 16:54:57.189448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.602 [2024-10-01 16:54:57.189453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.602 [2024-10-01 16:54:57.189458] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.602 [2024-10-01 16:54:57.189468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.602 qpair failed and we were unable to recover it. 
00:30:05.602 [2024-10-01 16:54:57.199411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.602 [2024-10-01 16:54:57.199456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.602 [2024-10-01 16:54:57.199473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.602 [2024-10-01 16:54:57.199479] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.602 [2024-10-01 16:54:57.199483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.602 [2024-10-01 16:54:57.199498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.602 qpair failed and we were unable to recover it. 00:30:05.602 [2024-10-01 16:54:57.209414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.602 [2024-10-01 16:54:57.209455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.602 [2024-10-01 16:54:57.209466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.602 [2024-10-01 16:54:57.209472] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.602 [2024-10-01 16:54:57.209476] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.602 [2024-10-01 16:54:57.209490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.602 qpair failed and we were unable to recover it. 00:30:05.602 [2024-10-01 16:54:57.219465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.602 [2024-10-01 16:54:57.219513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.602 [2024-10-01 16:54:57.219524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.602 [2024-10-01 16:54:57.219529] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.602 [2024-10-01 16:54:57.219534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.602 [2024-10-01 16:54:57.219545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.602 qpair failed and we were unable to recover it. 
00:30:05.602 [2024-10-01 16:54:57.229593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.602 [2024-10-01 16:54:57.229680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.602 [2024-10-01 16:54:57.229691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.602 [2024-10-01 16:54:57.229696] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.603 [2024-10-01 16:54:57.229701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.603 [2024-10-01 16:54:57.229712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.603 qpair failed and we were unable to recover it. 00:30:05.603 [2024-10-01 16:54:57.239502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.603 [2024-10-01 16:54:57.239548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.603 [2024-10-01 16:54:57.239559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.603 [2024-10-01 16:54:57.239564] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.603 [2024-10-01 16:54:57.239568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.603 [2024-10-01 16:54:57.239579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.603 qpair failed and we were unable to recover it. 00:30:05.603 [2024-10-01 16:54:57.249426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.603 [2024-10-01 16:54:57.249468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.603 [2024-10-01 16:54:57.249478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.603 [2024-10-01 16:54:57.249484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.603 [2024-10-01 16:54:57.249488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.603 [2024-10-01 16:54:57.249499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.603 qpair failed and we were unable to recover it. 
00:30:05.603 [2024-10-01 16:54:57.259696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.603 [2024-10-01 16:54:57.259755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.603 [2024-10-01 16:54:57.259765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.603 [2024-10-01 16:54:57.259770] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.603 [2024-10-01 16:54:57.259775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.603 [2024-10-01 16:54:57.259786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.603 qpair failed and we were unable to recover it. 00:30:05.603 [2024-10-01 16:54:57.269540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.603 [2024-10-01 16:54:57.269583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.603 [2024-10-01 16:54:57.269593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.603 [2024-10-01 16:54:57.269599] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.603 [2024-10-01 16:54:57.269604] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.603 [2024-10-01 16:54:57.269615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.603 qpair failed and we were unable to recover it. 00:30:05.603 [2024-10-01 16:54:57.279658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.603 [2024-10-01 16:54:57.279702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.603 [2024-10-01 16:54:57.279712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.603 [2024-10-01 16:54:57.279718] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.603 [2024-10-01 16:54:57.279723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.603 [2024-10-01 16:54:57.279733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.603 qpair failed and we were unable to recover it. 
00:30:05.864 [2024-10-01 16:54:57.289547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.864 [2024-10-01 16:54:57.289588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.864 [2024-10-01 16:54:57.289599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.864 [2024-10-01 16:54:57.289604] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.864 [2024-10-01 16:54:57.289609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.864 [2024-10-01 16:54:57.289619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.864 qpair failed and we were unable to recover it. 00:30:05.864 [2024-10-01 16:54:57.299676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.864 [2024-10-01 16:54:57.299722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.864 [2024-10-01 16:54:57.299733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.864 [2024-10-01 16:54:57.299738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.864 [2024-10-01 16:54:57.299746] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.864 [2024-10-01 16:54:57.299757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.864 qpair failed and we were unable to recover it. 00:30:05.864 [2024-10-01 16:54:57.309600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.864 [2024-10-01 16:54:57.309642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.864 [2024-10-01 16:54:57.309653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.864 [2024-10-01 16:54:57.309659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.864 [2024-10-01 16:54:57.309664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.864 [2024-10-01 16:54:57.309675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.864 qpair failed and we were unable to recover it. 
00:30:05.864 [2024-10-01 16:54:57.319752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.864 [2024-10-01 16:54:57.319794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.864 [2024-10-01 16:54:57.319804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.864 [2024-10-01 16:54:57.319810] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.864 [2024-10-01 16:54:57.319815] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.864 [2024-10-01 16:54:57.319826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.864 qpair failed and we were unable to recover it. 00:30:05.864 [2024-10-01 16:54:57.329801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.864 [2024-10-01 16:54:57.329848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.864 [2024-10-01 16:54:57.329867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.864 [2024-10-01 16:54:57.329874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.864 [2024-10-01 16:54:57.329880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.865 [2024-10-01 16:54:57.329894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.865 qpair failed and we were unable to recover it. 00:30:05.865 [2024-10-01 16:54:57.339805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.865 [2024-10-01 16:54:57.339850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.865 [2024-10-01 16:54:57.339862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.865 [2024-10-01 16:54:57.339868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.865 [2024-10-01 16:54:57.339873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.865 [2024-10-01 16:54:57.339884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.865 qpair failed and we were unable to recover it. 
00:30:05.865 [2024-10-01 16:54:57.349832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.865 [2024-10-01 16:54:57.349878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.865 [2024-10-01 16:54:57.349889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.865 [2024-10-01 16:54:57.349894] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.865 [2024-10-01 16:54:57.349899] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.865 [2024-10-01 16:54:57.349910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.865 qpair failed and we were unable to recover it. 00:30:05.865 [2024-10-01 16:54:57.359844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.865 [2024-10-01 16:54:57.359886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.865 [2024-10-01 16:54:57.359897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.865 [2024-10-01 16:54:57.359903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.865 [2024-10-01 16:54:57.359908] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.865 [2024-10-01 16:54:57.359919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.865 qpair failed and we were unable to recover it. 00:30:05.865 [2024-10-01 16:54:57.369863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.865 [2024-10-01 16:54:57.369912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.865 [2024-10-01 16:54:57.369922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.865 [2024-10-01 16:54:57.369928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.865 [2024-10-01 16:54:57.369933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.865 [2024-10-01 16:54:57.369943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.865 qpair failed and we were unable to recover it. 
00:30:05.865 [2024-10-01 16:54:57.379923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.865 [2024-10-01 16:54:57.380007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.865 [2024-10-01 16:54:57.380018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.865 [2024-10-01 16:54:57.380024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.865 [2024-10-01 16:54:57.380029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.865 [2024-10-01 16:54:57.380040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.865 qpair failed and we were unable to recover it. 00:30:05.865 [2024-10-01 16:54:57.389929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.865 [2024-10-01 16:54:57.389977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.865 [2024-10-01 16:54:57.389988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.865 [2024-10-01 16:54:57.389996] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.865 [2024-10-01 16:54:57.390001] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.865 [2024-10-01 16:54:57.390012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.865 qpair failed and we were unable to recover it. 00:30:05.865 [2024-10-01 16:54:57.399961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.865 [2024-10-01 16:54:57.400007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.865 [2024-10-01 16:54:57.400018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.865 [2024-10-01 16:54:57.400024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.865 [2024-10-01 16:54:57.400029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.865 [2024-10-01 16:54:57.400040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.865 qpair failed and we were unable to recover it. 
00:30:05.865 [2024-10-01 16:54:57.409973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.865 [2024-10-01 16:54:57.410014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.865 [2024-10-01 16:54:57.410025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.865 [2024-10-01 16:54:57.410030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.865 [2024-10-01 16:54:57.410035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.865 [2024-10-01 16:54:57.410046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.865 qpair failed and we were unable to recover it. 00:30:05.865 [2024-10-01 16:54:57.420025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.865 [2024-10-01 16:54:57.420121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.865 [2024-10-01 16:54:57.420132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.865 [2024-10-01 16:54:57.420137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.865 [2024-10-01 16:54:57.420142] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.865 [2024-10-01 16:54:57.420153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.865 qpair failed and we were unable to recover it. 00:30:05.865 [2024-10-01 16:54:57.430058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.865 [2024-10-01 16:54:57.430107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.865 [2024-10-01 16:54:57.430118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.865 [2024-10-01 16:54:57.430123] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.865 [2024-10-01 16:54:57.430128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.865 [2024-10-01 16:54:57.430139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.865 qpair failed and we were unable to recover it. 
00:30:05.865 [2024-10-01 16:54:57.440065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.865 [2024-10-01 16:54:57.440108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.865 [2024-10-01 16:54:57.440118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.865 [2024-10-01 16:54:57.440124] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.865 [2024-10-01 16:54:57.440129] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.865 [2024-10-01 16:54:57.440140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.865 qpair failed and we were unable to recover it. 00:30:05.865 [2024-10-01 16:54:57.450077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.865 [2024-10-01 16:54:57.450128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.865 [2024-10-01 16:54:57.450138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.865 [2024-10-01 16:54:57.450144] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.865 [2024-10-01 16:54:57.450149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.865 [2024-10-01 16:54:57.450159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.865 qpair failed and we were unable to recover it. 00:30:05.865 [2024-10-01 16:54:57.460115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.865 [2024-10-01 16:54:57.460160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.865 [2024-10-01 16:54:57.460170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.865 [2024-10-01 16:54:57.460176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.865 [2024-10-01 16:54:57.460180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.865 [2024-10-01 16:54:57.460191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.865 qpair failed and we were unable to recover it. 
00:30:05.865 [2024-10-01 16:54:57.470139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.866 [2024-10-01 16:54:57.470188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.866 [2024-10-01 16:54:57.470198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.866 [2024-10-01 16:54:57.470204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.866 [2024-10-01 16:54:57.470209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.866 [2024-10-01 16:54:57.470219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.866 qpair failed and we were unable to recover it. 00:30:05.866 [2024-10-01 16:54:57.480192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.866 [2024-10-01 16:54:57.480233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.866 [2024-10-01 16:54:57.480243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.866 [2024-10-01 16:54:57.480251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.866 [2024-10-01 16:54:57.480256] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.866 [2024-10-01 16:54:57.480267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.866 qpair failed and we were unable to recover it. 00:30:05.866 [2024-10-01 16:54:57.490201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.866 [2024-10-01 16:54:57.490259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.866 [2024-10-01 16:54:57.490270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.866 [2024-10-01 16:54:57.490275] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.866 [2024-10-01 16:54:57.490280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.866 [2024-10-01 16:54:57.490291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.866 qpair failed and we were unable to recover it. 
00:30:05.866 [2024-10-01 16:54:57.500134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.866 [2024-10-01 16:54:57.500184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.866 [2024-10-01 16:54:57.500195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.866 [2024-10-01 16:54:57.500201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.866 [2024-10-01 16:54:57.500206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.866 [2024-10-01 16:54:57.500217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.866 qpair failed and we were unable to recover it. 00:30:05.866 [2024-10-01 16:54:57.510288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.866 [2024-10-01 16:54:57.510332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.866 [2024-10-01 16:54:57.510342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.866 [2024-10-01 16:54:57.510347] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.866 [2024-10-01 16:54:57.510352] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.866 [2024-10-01 16:54:57.510362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.866 qpair failed and we were unable to recover it. 00:30:05.866 [2024-10-01 16:54:57.520334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.866 [2024-10-01 16:54:57.520391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.866 [2024-10-01 16:54:57.520401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.866 [2024-10-01 16:54:57.520406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.866 [2024-10-01 16:54:57.520411] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.866 [2024-10-01 16:54:57.520423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.866 qpair failed and we were unable to recover it. 
00:30:05.866 [2024-10-01 16:54:57.530191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.866 [2024-10-01 16:54:57.530235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.866 [2024-10-01 16:54:57.530246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.866 [2024-10-01 16:54:57.530251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.866 [2024-10-01 16:54:57.530256] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.866 [2024-10-01 16:54:57.530267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.866 qpair failed and we were unable to recover it. 00:30:05.866 [2024-10-01 16:54:57.540368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:05.866 [2024-10-01 16:54:57.540428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:05.866 [2024-10-01 16:54:57.540439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:05.866 [2024-10-01 16:54:57.540445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.866 [2024-10-01 16:54:57.540450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:05.866 [2024-10-01 16:54:57.540460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:05.866 qpair failed and we were unable to recover it. 00:30:06.127 [2024-10-01 16:54:57.550343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.128 [2024-10-01 16:54:57.550385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.128 [2024-10-01 16:54:57.550396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.128 [2024-10-01 16:54:57.550401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.128 [2024-10-01 16:54:57.550406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.128 [2024-10-01 16:54:57.550417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.128 qpair failed and we were unable to recover it. 
00:30:06.128 [2024-10-01 16:54:57.560408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.128 [2024-10-01 16:54:57.560464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.128 [2024-10-01 16:54:57.560474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.128 [2024-10-01 16:54:57.560480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.128 [2024-10-01 16:54:57.560484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.128 [2024-10-01 16:54:57.560495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.128 qpair failed and we were unable to recover it. 00:30:06.128 [2024-10-01 16:54:57.570280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.128 [2024-10-01 16:54:57.570324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.128 [2024-10-01 16:54:57.570338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.128 [2024-10-01 16:54:57.570344] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.128 [2024-10-01 16:54:57.570349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.128 [2024-10-01 16:54:57.570360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.128 qpair failed and we were unable to recover it. 00:30:06.128 [2024-10-01 16:54:57.580443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.128 [2024-10-01 16:54:57.580486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.128 [2024-10-01 16:54:57.580497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.128 [2024-10-01 16:54:57.580503] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.128 [2024-10-01 16:54:57.580508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.128 [2024-10-01 16:54:57.580519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.128 qpair failed and we were unable to recover it. 
00:30:06.128 [2024-10-01 16:54:57.590490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.128 [2024-10-01 16:54:57.590532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.128 [2024-10-01 16:54:57.590542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.128 [2024-10-01 16:54:57.590548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.128 [2024-10-01 16:54:57.590554] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.128 [2024-10-01 16:54:57.590564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.128 qpair failed and we were unable to recover it. 00:30:06.128 [2024-10-01 16:54:57.600406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.128 [2024-10-01 16:54:57.600447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.128 [2024-10-01 16:54:57.600458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.128 [2024-10-01 16:54:57.600463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.128 [2024-10-01 16:54:57.600468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.128 [2024-10-01 16:54:57.600479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.128 qpair failed and we were unable to recover it. 00:30:06.128 [2024-10-01 16:54:57.610509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.128 [2024-10-01 16:54:57.610560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.128 [2024-10-01 16:54:57.610571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.128 [2024-10-01 16:54:57.610576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.128 [2024-10-01 16:54:57.610581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.128 [2024-10-01 16:54:57.610595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.128 qpair failed and we were unable to recover it. 
00:30:06.128 [2024-10-01 16:54:57.620568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.128 [2024-10-01 16:54:57.620612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.128 [2024-10-01 16:54:57.620622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.128 [2024-10-01 16:54:57.620627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.128 [2024-10-01 16:54:57.620632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.128 [2024-10-01 16:54:57.620643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.128 qpair failed and we were unable to recover it. 00:30:06.128 [2024-10-01 16:54:57.630596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.128 [2024-10-01 16:54:57.630639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.128 [2024-10-01 16:54:57.630650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.128 [2024-10-01 16:54:57.630656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.128 [2024-10-01 16:54:57.630661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.128 [2024-10-01 16:54:57.630671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.128 qpair failed and we were unable to recover it. 00:30:06.128 [2024-10-01 16:54:57.640611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.128 [2024-10-01 16:54:57.640652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.128 [2024-10-01 16:54:57.640663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.128 [2024-10-01 16:54:57.640668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.128 [2024-10-01 16:54:57.640673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.128 [2024-10-01 16:54:57.640684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.128 qpair failed and we were unable to recover it. 
00:30:06.128 [2024-10-01 16:54:57.650664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.128 [2024-10-01 16:54:57.650712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.128 [2024-10-01 16:54:57.650732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.128 [2024-10-01 16:54:57.650738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.128 [2024-10-01 16:54:57.650744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.128 [2024-10-01 16:54:57.650759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.128 qpair failed and we were unable to recover it. 00:30:06.128 [2024-10-01 16:54:57.660708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.128 [2024-10-01 16:54:57.660760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.128 [2024-10-01 16:54:57.660775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.128 [2024-10-01 16:54:57.660781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.128 [2024-10-01 16:54:57.660786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.128 [2024-10-01 16:54:57.660797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.128 qpair failed and we were unable to recover it. 00:30:06.128 [2024-10-01 16:54:57.670592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.128 [2024-10-01 16:54:57.670644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.128 [2024-10-01 16:54:57.670655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.128 [2024-10-01 16:54:57.670660] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.128 [2024-10-01 16:54:57.670665] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.128 [2024-10-01 16:54:57.670676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.128 qpair failed and we were unable to recover it. 
00:30:06.128 [2024-10-01 16:54:57.680736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.128 [2024-10-01 16:54:57.680780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.128 [2024-10-01 16:54:57.680791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.129 [2024-10-01 16:54:57.680796] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.129 [2024-10-01 16:54:57.680801] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.129 [2024-10-01 16:54:57.680812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.129 qpair failed and we were unable to recover it. 00:30:06.129 [2024-10-01 16:54:57.690699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.129 [2024-10-01 16:54:57.690743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.129 [2024-10-01 16:54:57.690753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.129 [2024-10-01 16:54:57.690759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.129 [2024-10-01 16:54:57.690763] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.129 [2024-10-01 16:54:57.690774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.129 qpair failed and we were unable to recover it. 00:30:06.129 [2024-10-01 16:54:57.700802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.129 [2024-10-01 16:54:57.700852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.129 [2024-10-01 16:54:57.700863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.129 [2024-10-01 16:54:57.700868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.129 [2024-10-01 16:54:57.700873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.129 [2024-10-01 16:54:57.700890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.129 qpair failed and we were unable to recover it. 
00:30:06.129 [2024-10-01 16:54:57.710770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.129 [2024-10-01 16:54:57.710818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.129 [2024-10-01 16:54:57.710828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.129 [2024-10-01 16:54:57.710834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.129 [2024-10-01 16:54:57.710839] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.129 [2024-10-01 16:54:57.710850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.129 qpair failed and we were unable to recover it. 00:30:06.129 [2024-10-01 16:54:57.720842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.129 [2024-10-01 16:54:57.720882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.129 [2024-10-01 16:54:57.720892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.129 [2024-10-01 16:54:57.720898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.129 [2024-10-01 16:54:57.720903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.129 [2024-10-01 16:54:57.720913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.129 qpair failed and we were unable to recover it. 00:30:06.129 [2024-10-01 16:54:57.730719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.129 [2024-10-01 16:54:57.730761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.129 [2024-10-01 16:54:57.730771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.129 [2024-10-01 16:54:57.730776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.129 [2024-10-01 16:54:57.730781] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.129 [2024-10-01 16:54:57.730793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.129 qpair failed and we were unable to recover it. 
00:30:06.129 [2024-10-01 16:54:57.740882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.129 [2024-10-01 16:54:57.740931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.129 [2024-10-01 16:54:57.740942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.129 [2024-10-01 16:54:57.740947] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.129 [2024-10-01 16:54:57.740952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.129 [2024-10-01 16:54:57.740963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.129 qpair failed and we were unable to recover it. 00:30:06.129 [2024-10-01 16:54:57.750921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.129 [2024-10-01 16:54:57.750964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.129 [2024-10-01 16:54:57.750982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.129 [2024-10-01 16:54:57.750987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.129 [2024-10-01 16:54:57.750993] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.129 [2024-10-01 16:54:57.751004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.129 qpair failed and we were unable to recover it. 00:30:06.129 [2024-10-01 16:54:57.760943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.129 [2024-10-01 16:54:57.760992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.129 [2024-10-01 16:54:57.761002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.129 [2024-10-01 16:54:57.761008] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.129 [2024-10-01 16:54:57.761013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.129 [2024-10-01 16:54:57.761024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.129 qpair failed and we were unable to recover it. 
00:30:06.129 [2024-10-01 16:54:57.770946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.129 [2024-10-01 16:54:57.770992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.129 [2024-10-01 16:54:57.771002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.129 [2024-10-01 16:54:57.771008] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.129 [2024-10-01 16:54:57.771013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.129 [2024-10-01 16:54:57.771023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.129 qpair failed and we were unable to recover it. 00:30:06.129 [2024-10-01 16:54:57.780887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.129 [2024-10-01 16:54:57.780942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.129 [2024-10-01 16:54:57.780953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.129 [2024-10-01 16:54:57.780959] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.129 [2024-10-01 16:54:57.780964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.129 [2024-10-01 16:54:57.780978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.129 qpair failed and we were unable to recover it. 00:30:06.129 [2024-10-01 16:54:57.791034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.129 [2024-10-01 16:54:57.791078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.129 [2024-10-01 16:54:57.791088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.129 [2024-10-01 16:54:57.791094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.129 [2024-10-01 16:54:57.791101] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.129 [2024-10-01 16:54:57.791112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.129 qpair failed and we were unable to recover it. 
00:30:06.129 [2024-10-01 16:54:57.801059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.129 [2024-10-01 16:54:57.801104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.129 [2024-10-01 16:54:57.801114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.129 [2024-10-01 16:54:57.801120] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.129 [2024-10-01 16:54:57.801125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.129 [2024-10-01 16:54:57.801136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.129 qpair failed and we were unable to recover it. 00:30:06.391 [2024-10-01 16:54:57.811056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.391 [2024-10-01 16:54:57.811099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.391 [2024-10-01 16:54:57.811110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.391 [2024-10-01 16:54:57.811115] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.391 [2024-10-01 16:54:57.811120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.391 [2024-10-01 16:54:57.811131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.391 qpair failed and we were unable to recover it. 00:30:06.391 [2024-10-01 16:54:57.821089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.391 [2024-10-01 16:54:57.821138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.391 [2024-10-01 16:54:57.821149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.391 [2024-10-01 16:54:57.821155] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.391 [2024-10-01 16:54:57.821160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.391 [2024-10-01 16:54:57.821171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.391 qpair failed and we were unable to recover it. 
00:30:06.391 [2024-10-01 16:54:57.831139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.391 [2024-10-01 16:54:57.831183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.391 [2024-10-01 16:54:57.831194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.391 [2024-10-01 16:54:57.831199] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.391 [2024-10-01 16:54:57.831205] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.391 [2024-10-01 16:54:57.831215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.391 qpair failed and we were unable to recover it. 00:30:06.391 [2024-10-01 16:54:57.841181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.391 [2024-10-01 16:54:57.841247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.391 [2024-10-01 16:54:57.841258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.391 [2024-10-01 16:54:57.841263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.391 [2024-10-01 16:54:57.841268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.391 [2024-10-01 16:54:57.841279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.391 qpair failed and we were unable to recover it. 00:30:06.391 [2024-10-01 16:54:57.851165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.391 [2024-10-01 16:54:57.851209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.391 [2024-10-01 16:54:57.851220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.391 [2024-10-01 16:54:57.851225] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.391 [2024-10-01 16:54:57.851230] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.391 [2024-10-01 16:54:57.851240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.391 qpair failed and we were unable to recover it. 
00:30:06.391 [2024-10-01 16:54:57.861263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.391 [2024-10-01 16:54:57.861345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.391 [2024-10-01 16:54:57.861355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.391 [2024-10-01 16:54:57.861360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.391 [2024-10-01 16:54:57.861366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.391 [2024-10-01 16:54:57.861376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.391 qpair failed and we were unable to recover it. 00:30:06.391 [2024-10-01 16:54:57.871228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.391 [2024-10-01 16:54:57.871295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.391 [2024-10-01 16:54:57.871305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.391 [2024-10-01 16:54:57.871310] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.391 [2024-10-01 16:54:57.871315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.391 [2024-10-01 16:54:57.871326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.391 qpair failed and we were unable to recover it. 00:30:06.391 [2024-10-01 16:54:57.881262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.391 [2024-10-01 16:54:57.881303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.391 [2024-10-01 16:54:57.881313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.391 [2024-10-01 16:54:57.881318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.391 [2024-10-01 16:54:57.881326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.391 [2024-10-01 16:54:57.881336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.391 qpair failed and we were unable to recover it. 
00:30:06.391 [2024-10-01 16:54:57.891282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.391 [2024-10-01 16:54:57.891327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.391 [2024-10-01 16:54:57.891337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.391 [2024-10-01 16:54:57.891343] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.391 [2024-10-01 16:54:57.891348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.391 [2024-10-01 16:54:57.891358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.391 qpair failed and we were unable to recover it. 00:30:06.391 [2024-10-01 16:54:57.901368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.391 [2024-10-01 16:54:57.901460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.391 [2024-10-01 16:54:57.901470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.391 [2024-10-01 16:54:57.901476] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.391 [2024-10-01 16:54:57.901482] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.391 [2024-10-01 16:54:57.901492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.391 qpair failed and we were unable to recover it. 00:30:06.391 [2024-10-01 16:54:57.911347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.391 [2024-10-01 16:54:57.911389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.391 [2024-10-01 16:54:57.911399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.391 [2024-10-01 16:54:57.911404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.391 [2024-10-01 16:54:57.911409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.391 [2024-10-01 16:54:57.911420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.391 qpair failed and we were unable to recover it. 
00:30:06.391 [2024-10-01 16:54:57.921348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.391 [2024-10-01 16:54:57.921390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.391 [2024-10-01 16:54:57.921400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.391 [2024-10-01 16:54:57.921406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.391 [2024-10-01 16:54:57.921410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.391 [2024-10-01 16:54:57.921421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.391 qpair failed and we were unable to recover it. 00:30:06.392 [2024-10-01 16:54:57.931325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.392 [2024-10-01 16:54:57.931367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.392 [2024-10-01 16:54:57.931377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.392 [2024-10-01 16:54:57.931382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.392 [2024-10-01 16:54:57.931387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.392 [2024-10-01 16:54:57.931398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.392 qpair failed and we were unable to recover it. 00:30:06.392 [2024-10-01 16:54:57.941474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.392 [2024-10-01 16:54:57.941525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.392 [2024-10-01 16:54:57.941535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.392 [2024-10-01 16:54:57.941540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.392 [2024-10-01 16:54:57.941545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.392 [2024-10-01 16:54:57.941555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.392 qpair failed and we were unable to recover it. 
00:30:06.392 [2024-10-01 16:54:57.951437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.392 [2024-10-01 16:54:57.951484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.392 [2024-10-01 16:54:57.951494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.392 [2024-10-01 16:54:57.951499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.392 [2024-10-01 16:54:57.951504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.392 [2024-10-01 16:54:57.951515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.392 qpair failed and we were unable to recover it. 00:30:06.392 [2024-10-01 16:54:57.961441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.392 [2024-10-01 16:54:57.961482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.392 [2024-10-01 16:54:57.961493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.392 [2024-10-01 16:54:57.961498] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.392 [2024-10-01 16:54:57.961503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.392 [2024-10-01 16:54:57.961514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.392 qpair failed and we were unable to recover it. 00:30:06.392 [2024-10-01 16:54:57.971499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.392 [2024-10-01 16:54:57.971542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.392 [2024-10-01 16:54:57.971552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.392 [2024-10-01 16:54:57.971560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.392 [2024-10-01 16:54:57.971565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.392 [2024-10-01 16:54:57.971576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.392 qpair failed and we were unable to recover it. 
00:30:06.392 [2024-10-01 16:54:57.981573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.392 [2024-10-01 16:54:57.981621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.392 [2024-10-01 16:54:57.981631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.392 [2024-10-01 16:54:57.981637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.392 [2024-10-01 16:54:57.981642] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.392 [2024-10-01 16:54:57.981652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.392 qpair failed and we were unable to recover it. 00:30:06.392 [2024-10-01 16:54:57.991580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.392 [2024-10-01 16:54:57.991622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.392 [2024-10-01 16:54:57.991632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.392 [2024-10-01 16:54:57.991637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.392 [2024-10-01 16:54:57.991643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.392 [2024-10-01 16:54:57.991654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.392 qpair failed and we were unable to recover it. 00:30:06.392 [2024-10-01 16:54:58.001605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.392 [2024-10-01 16:54:58.001649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.392 [2024-10-01 16:54:58.001660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.392 [2024-10-01 16:54:58.001665] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.392 [2024-10-01 16:54:58.001670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.392 [2024-10-01 16:54:58.001680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.392 qpair failed and we were unable to recover it. 
00:30:06.392 [2024-10-01 16:54:58.011605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.392 [2024-10-01 16:54:58.011671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.392 [2024-10-01 16:54:58.011682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.392 [2024-10-01 16:54:58.011687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.392 [2024-10-01 16:54:58.011692] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.392 [2024-10-01 16:54:58.011702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.392 qpair failed and we were unable to recover it. 00:30:06.392 [2024-10-01 16:54:58.021679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.392 [2024-10-01 16:54:58.021726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.392 [2024-10-01 16:54:58.021737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.392 [2024-10-01 16:54:58.021742] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.392 [2024-10-01 16:54:58.021747] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.392 [2024-10-01 16:54:58.021757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.392 qpair failed and we were unable to recover it. 00:30:06.392 [2024-10-01 16:54:58.031668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.392 [2024-10-01 16:54:58.031713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.392 [2024-10-01 16:54:58.031724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.392 [2024-10-01 16:54:58.031730] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.392 [2024-10-01 16:54:58.031734] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.392 [2024-10-01 16:54:58.031745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.392 qpair failed and we were unable to recover it. 
00:30:06.392 [2024-10-01 16:54:58.041707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.392 [2024-10-01 16:54:58.041762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.392 [2024-10-01 16:54:58.041772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.392 [2024-10-01 16:54:58.041778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.392 [2024-10-01 16:54:58.041783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.392 [2024-10-01 16:54:58.041794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.392 qpair failed and we were unable to recover it. 00:30:06.392 [2024-10-01 16:54:58.051727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.392 [2024-10-01 16:54:58.051780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.392 [2024-10-01 16:54:58.051790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.392 [2024-10-01 16:54:58.051795] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.392 [2024-10-01 16:54:58.051801] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.392 [2024-10-01 16:54:58.051812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.392 qpair failed and we were unable to recover it. 00:30:06.392 [2024-10-01 16:54:58.061803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.392 [2024-10-01 16:54:58.061856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.392 [2024-10-01 16:54:58.061867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.393 [2024-10-01 16:54:58.061875] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.393 [2024-10-01 16:54:58.061881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:06.393 [2024-10-01 16:54:58.061892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:06.393 qpair failed and we were unable to recover it. 
00:30:06.393 [2024-10-01 16:54:58.071824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.393 [2024-10-01 16:54:58.071905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.393 [2024-10-01 16:54:58.071916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.393 [2024-10-01 16:54:58.071922] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.393 [2024-10-01 16:54:58.071928] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.393 [2024-10-01 16:54:58.071939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.393 qpair failed and we were unable to recover it.
00:30:06.655 [2024-10-01 16:54:58.081846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.655 [2024-10-01 16:54:58.081888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.655 [2024-10-01 16:54:58.081899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.655 [2024-10-01 16:54:58.081904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.655 [2024-10-01 16:54:58.081909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.655 [2024-10-01 16:54:58.081920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.655 qpair failed and we were unable to recover it.
00:30:06.655 [2024-10-01 16:54:58.091831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.655 [2024-10-01 16:54:58.091872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.655 [2024-10-01 16:54:58.091882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.655 [2024-10-01 16:54:58.091887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.655 [2024-10-01 16:54:58.091892] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.655 [2024-10-01 16:54:58.091903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.655 qpair failed and we were unable to recover it.
00:30:06.655 [2024-10-01 16:54:58.101887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.655 [2024-10-01 16:54:58.101980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.655 [2024-10-01 16:54:58.101993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.655 [2024-10-01 16:54:58.101998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.655 [2024-10-01 16:54:58.102003] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.655 [2024-10-01 16:54:58.102015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.655 qpair failed and we were unable to recover it.
00:30:06.655 [2024-10-01 16:54:58.111907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.655 [2024-10-01 16:54:58.111951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.655 [2024-10-01 16:54:58.111961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.655 [2024-10-01 16:54:58.111967] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.655 [2024-10-01 16:54:58.111976] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.655 [2024-10-01 16:54:58.111986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.655 qpair failed and we were unable to recover it.
00:30:06.655 [2024-10-01 16:54:58.121829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.655 [2024-10-01 16:54:58.121901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.655 [2024-10-01 16:54:58.121911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.655 [2024-10-01 16:54:58.121917] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.655 [2024-10-01 16:54:58.121922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.655 [2024-10-01 16:54:58.121932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.655 qpair failed and we were unable to recover it.
00:30:06.655 [2024-10-01 16:54:58.131940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.655 [2024-10-01 16:54:58.131985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.655 [2024-10-01 16:54:58.131995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.655 [2024-10-01 16:54:58.132000] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.655 [2024-10-01 16:54:58.132005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.655 [2024-10-01 16:54:58.132016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.655 qpair failed and we were unable to recover it.
00:30:06.655 [2024-10-01 16:54:58.141941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.655 [2024-10-01 16:54:58.141989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.655 [2024-10-01 16:54:58.141999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.655 [2024-10-01 16:54:58.142006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.655 [2024-10-01 16:54:58.142011] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.655 [2024-10-01 16:54:58.142021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.655 qpair failed and we were unable to recover it.
00:30:06.655 [2024-10-01 16:54:58.152021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.655 [2024-10-01 16:54:58.152065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.655 [2024-10-01 16:54:58.152077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.655 [2024-10-01 16:54:58.152083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.655 [2024-10-01 16:54:58.152088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.655 [2024-10-01 16:54:58.152099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.655 qpair failed and we were unable to recover it.
00:30:06.655 [2024-10-01 16:54:58.161928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.655 [2024-10-01 16:54:58.161973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.655 [2024-10-01 16:54:58.161985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.655 [2024-10-01 16:54:58.161990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.655 [2024-10-01 16:54:58.161995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.655 [2024-10-01 16:54:58.162006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.656 qpair failed and we were unable to recover it.
00:30:06.656 [2024-10-01 16:54:58.172057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.656 [2024-10-01 16:54:58.172099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.656 [2024-10-01 16:54:58.172110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.656 [2024-10-01 16:54:58.172115] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.656 [2024-10-01 16:54:58.172120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.656 [2024-10-01 16:54:58.172131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.656 qpair failed and we were unable to recover it.
00:30:06.656 [2024-10-01 16:54:58.182163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.656 [2024-10-01 16:54:58.182234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.656 [2024-10-01 16:54:58.182245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.656 [2024-10-01 16:54:58.182251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.656 [2024-10-01 16:54:58.182256] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.656 [2024-10-01 16:54:58.182266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.656 qpair failed and we were unable to recover it.
00:30:06.656 [2024-10-01 16:54:58.192141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.656 [2024-10-01 16:54:58.192183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.656 [2024-10-01 16:54:58.192193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.656 [2024-10-01 16:54:58.192199] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.656 [2024-10-01 16:54:58.192204] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.656 [2024-10-01 16:54:58.192217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.656 qpair failed and we were unable to recover it.
00:30:06.656 [2024-10-01 16:54:58.202143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.656 [2024-10-01 16:54:58.202189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.656 [2024-10-01 16:54:58.202200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.656 [2024-10-01 16:54:58.202205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.656 [2024-10-01 16:54:58.202210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.656 [2024-10-01 16:54:58.202220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.656 qpair failed and we were unable to recover it.
00:30:06.656 [2024-10-01 16:54:58.212130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.656 [2024-10-01 16:54:58.212172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.656 [2024-10-01 16:54:58.212181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.656 [2024-10-01 16:54:58.212187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.656 [2024-10-01 16:54:58.212192] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.656 [2024-10-01 16:54:58.212202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.656 qpair failed and we were unable to recover it.
00:30:06.656 [2024-10-01 16:54:58.222241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.656 [2024-10-01 16:54:58.222289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.656 [2024-10-01 16:54:58.222299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.656 [2024-10-01 16:54:58.222305] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.656 [2024-10-01 16:54:58.222309] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.656 [2024-10-01 16:54:58.222320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.656 qpair failed and we were unable to recover it.
00:30:06.656 [2024-10-01 16:54:58.232241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.656 [2024-10-01 16:54:58.232285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.656 [2024-10-01 16:54:58.232297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.656 [2024-10-01 16:54:58.232303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.656 [2024-10-01 16:54:58.232308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.656 [2024-10-01 16:54:58.232320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.656 qpair failed and we were unable to recover it.
00:30:06.656 [2024-10-01 16:54:58.242285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.656 [2024-10-01 16:54:58.242326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.656 [2024-10-01 16:54:58.242340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.656 [2024-10-01 16:54:58.242346] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.656 [2024-10-01 16:54:58.242351] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.656 [2024-10-01 16:54:58.242362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.656 qpair failed and we were unable to recover it.
00:30:06.656 [2024-10-01 16:54:58.252332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.656 [2024-10-01 16:54:58.252372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.656 [2024-10-01 16:54:58.252383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.656 [2024-10-01 16:54:58.252389] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.656 [2024-10-01 16:54:58.252394] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.656 [2024-10-01 16:54:58.252405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.656 qpair failed and we were unable to recover it.
00:30:06.656 [2024-10-01 16:54:58.262343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.656 [2024-10-01 16:54:58.262389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.656 [2024-10-01 16:54:58.262400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.656 [2024-10-01 16:54:58.262405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.656 [2024-10-01 16:54:58.262410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.656 [2024-10-01 16:54:58.262421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.656 qpair failed and we were unable to recover it.
00:30:06.656 [2024-10-01 16:54:58.272375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.656 [2024-10-01 16:54:58.272417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.657 [2024-10-01 16:54:58.272427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.657 [2024-10-01 16:54:58.272433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.657 [2024-10-01 16:54:58.272437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.657 [2024-10-01 16:54:58.272448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.657 qpair failed and we were unable to recover it.
00:30:06.657 [2024-10-01 16:54:58.282353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.657 [2024-10-01 16:54:58.282402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.657 [2024-10-01 16:54:58.282412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.657 [2024-10-01 16:54:58.282418] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.657 [2024-10-01 16:54:58.282425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.657 [2024-10-01 16:54:58.282436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.657 qpair failed and we were unable to recover it.
00:30:06.657 [2024-10-01 16:54:58.292401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.657 [2024-10-01 16:54:58.292445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.657 [2024-10-01 16:54:58.292455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.657 [2024-10-01 16:54:58.292461] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.657 [2024-10-01 16:54:58.292466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.657 [2024-10-01 16:54:58.292477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.657 qpair failed and we were unable to recover it.
00:30:06.657 [2024-10-01 16:54:58.302458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.657 [2024-10-01 16:54:58.302506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.657 [2024-10-01 16:54:58.302516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.657 [2024-10-01 16:54:58.302522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.657 [2024-10-01 16:54:58.302527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.657 [2024-10-01 16:54:58.302537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.657 qpair failed and we were unable to recover it.
00:30:06.657 [2024-10-01 16:54:58.312426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.657 [2024-10-01 16:54:58.312467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.657 [2024-10-01 16:54:58.312478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.657 [2024-10-01 16:54:58.312483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.657 [2024-10-01 16:54:58.312488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.657 [2024-10-01 16:54:58.312498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.657 qpair failed and we were unable to recover it.
00:30:06.657 [2024-10-01 16:54:58.322376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.657 [2024-10-01 16:54:58.322419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.657 [2024-10-01 16:54:58.322429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.657 [2024-10-01 16:54:58.322435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.657 [2024-10-01 16:54:58.322440] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.657 [2024-10-01 16:54:58.322450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.657 qpair failed and we were unable to recover it.
00:30:06.657 [2024-10-01 16:54:58.332430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.657 [2024-10-01 16:54:58.332491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.657 [2024-10-01 16:54:58.332501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.657 [2024-10-01 16:54:58.332507] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.657 [2024-10-01 16:54:58.332512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.657 [2024-10-01 16:54:58.332522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.657 qpair failed and we were unable to recover it.
00:30:06.919 [2024-10-01 16:54:58.342611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.919 [2024-10-01 16:54:58.342657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.919 [2024-10-01 16:54:58.342667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.919 [2024-10-01 16:54:58.342673] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.919 [2024-10-01 16:54:58.342678] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.919 [2024-10-01 16:54:58.342688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.919 qpair failed and we were unable to recover it.
00:30:06.919 [2024-10-01 16:54:58.352469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.919 [2024-10-01 16:54:58.352511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.919 [2024-10-01 16:54:58.352521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.919 [2024-10-01 16:54:58.352527] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.919 [2024-10-01 16:54:58.352531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.919 [2024-10-01 16:54:58.352542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.919 qpair failed and we were unable to recover it.
00:30:06.919 [2024-10-01 16:54:58.362495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.919 [2024-10-01 16:54:58.362534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.919 [2024-10-01 16:54:58.362545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.919 [2024-10-01 16:54:58.362550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.919 [2024-10-01 16:54:58.362555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.919 [2024-10-01 16:54:58.362566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.919 qpair failed and we were unable to recover it.
00:30:06.919 [2024-10-01 16:54:58.372583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.919 [2024-10-01 16:54:58.372624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.919 [2024-10-01 16:54:58.372634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.919 [2024-10-01 16:54:58.372640] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.919 [2024-10-01 16:54:58.372648] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.919 [2024-10-01 16:54:58.372658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.919 qpair failed and we were unable to recover it.
00:30:06.919 [2024-10-01 16:54:58.382646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.919 [2024-10-01 16:54:58.382691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.919 [2024-10-01 16:54:58.382701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.919 [2024-10-01 16:54:58.382706] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.919 [2024-10-01 16:54:58.382711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.919 [2024-10-01 16:54:58.382722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.919 qpair failed and we were unable to recover it.
00:30:06.919 [2024-10-01 16:54:58.392671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.919 [2024-10-01 16:54:58.392713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.919 [2024-10-01 16:54:58.392723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.919 [2024-10-01 16:54:58.392729] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.919 [2024-10-01 16:54:58.392734] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.919 [2024-10-01 16:54:58.392744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.919 qpair failed and we were unable to recover it.
00:30:06.919 [2024-10-01 16:54:58.402724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.919 [2024-10-01 16:54:58.402810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.919 [2024-10-01 16:54:58.402821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.919 [2024-10-01 16:54:58.402826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.919 [2024-10-01 16:54:58.402831] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.919 [2024-10-01 16:54:58.402841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.919 qpair failed and we were unable to recover it.
00:30:06.919 [2024-10-01 16:54:58.412725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.919 [2024-10-01 16:54:58.412770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.919 [2024-10-01 16:54:58.412780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.919 [2024-10-01 16:54:58.412785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.919 [2024-10-01 16:54:58.412791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.919 [2024-10-01 16:54:58.412801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.919 qpair failed and we were unable to recover it.
00:30:06.919 [2024-10-01 16:54:58.422782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.919 [2024-10-01 16:54:58.422830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.919 [2024-10-01 16:54:58.422841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.919 [2024-10-01 16:54:58.422846] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.919 [2024-10-01 16:54:58.422851] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.919 [2024-10-01 16:54:58.422861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.919 qpair failed and we were unable to recover it.
00:30:06.919 [2024-10-01 16:54:58.432814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.919 [2024-10-01 16:54:58.432856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.919 [2024-10-01 16:54:58.432866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.919 [2024-10-01 16:54:58.432872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.919 [2024-10-01 16:54:58.432876] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.919 [2024-10-01 16:54:58.432887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.919 qpair failed and we were unable to recover it.
00:30:06.919 [2024-10-01 16:54:58.442847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.919 [2024-10-01 16:54:58.442937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.920 [2024-10-01 16:54:58.442947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.920 [2024-10-01 16:54:58.442953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.920 [2024-10-01 16:54:58.442958] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.920 [2024-10-01 16:54:58.442972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.920 qpair failed and we were unable to recover it.
00:30:06.920 [2024-10-01 16:54:58.452828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.920 [2024-10-01 16:54:58.452869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.920 [2024-10-01 16:54:58.452880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.920 [2024-10-01 16:54:58.452885] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.920 [2024-10-01 16:54:58.452890] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.920 [2024-10-01 16:54:58.452901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.920 qpair failed and we were unable to recover it.
00:30:06.920 [2024-10-01 16:54:58.462908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.920 [2024-10-01 16:54:58.462958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.920 [2024-10-01 16:54:58.462971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.920 [2024-10-01 16:54:58.462986] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.920 [2024-10-01 16:54:58.462991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.920 [2024-10-01 16:54:58.463002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.920 qpair failed and we were unable to recover it.
00:30:06.920 [2024-10-01 16:54:58.472927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.920 [2024-10-01 16:54:58.472967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.920 [2024-10-01 16:54:58.472980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.920 [2024-10-01 16:54:58.472986] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.920 [2024-10-01 16:54:58.472990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.920 [2024-10-01 16:54:58.473001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.920 qpair failed and we were unable to recover it.
00:30:06.920 [2024-10-01 16:54:58.482938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.920 [2024-10-01 16:54:58.482984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.920 [2024-10-01 16:54:58.482995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.920 [2024-10-01 16:54:58.483000] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.920 [2024-10-01 16:54:58.483005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.920 [2024-10-01 16:54:58.483015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.920 qpair failed and we were unable to recover it.
00:30:06.920 [2024-10-01 16:54:58.492886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.920 [2024-10-01 16:54:58.492928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.920 [2024-10-01 16:54:58.492938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.920 [2024-10-01 16:54:58.492944] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.920 [2024-10-01 16:54:58.492949] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.920 [2024-10-01 16:54:58.492959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.920 qpair failed and we were unable to recover it.
00:30:06.920 [2024-10-01 16:54:58.503050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.920 [2024-10-01 16:54:58.503093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.920 [2024-10-01 16:54:58.503103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.920 [2024-10-01 16:54:58.503109] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.920 [2024-10-01 16:54:58.503114] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.920 [2024-10-01 16:54:58.503125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.920 qpair failed and we were unable to recover it.
00:30:06.920 [2024-10-01 16:54:58.513019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.920 [2024-10-01 16:54:58.513062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.920 [2024-10-01 16:54:58.513073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.920 [2024-10-01 16:54:58.513078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.920 [2024-10-01 16:54:58.513083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.920 [2024-10-01 16:54:58.513094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.920 qpair failed and we were unable to recover it.
00:30:06.920 [2024-10-01 16:54:58.523058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.920 [2024-10-01 16:54:58.523102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.920 [2024-10-01 16:54:58.523113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.920 [2024-10-01 16:54:58.523118] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.920 [2024-10-01 16:54:58.523123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.920 [2024-10-01 16:54:58.523134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.920 qpair failed and we were unable to recover it.
00:30:06.920 [2024-10-01 16:54:58.533070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.920 [2024-10-01 16:54:58.533114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.920 [2024-10-01 16:54:58.533125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.920 [2024-10-01 16:54:58.533130] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.920 [2024-10-01 16:54:58.533134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.920 [2024-10-01 16:54:58.533145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.920 qpair failed and we were unable to recover it.
00:30:06.920 [2024-10-01 16:54:58.543125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.920 [2024-10-01 16:54:58.543216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.920 [2024-10-01 16:54:58.543227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.920 [2024-10-01 16:54:58.543232] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.920 [2024-10-01 16:54:58.543237] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.920 [2024-10-01 16:54:58.543248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.920 qpair failed and we were unable to recover it.
00:30:06.920 [2024-10-01 16:54:58.553109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.920 [2024-10-01 16:54:58.553153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.921 [2024-10-01 16:54:58.553163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.921 [2024-10-01 16:54:58.553171] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.921 [2024-10-01 16:54:58.553176] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.921 [2024-10-01 16:54:58.553186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.921 qpair failed and we were unable to recover it.
00:30:06.921 [2024-10-01 16:54:58.563173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.921 [2024-10-01 16:54:58.563214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.921 [2024-10-01 16:54:58.563224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.921 [2024-10-01 16:54:58.563230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.921 [2024-10-01 16:54:58.563235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.921 [2024-10-01 16:54:58.563245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.921 qpair failed and we were unable to recover it.
00:30:06.921 [2024-10-01 16:54:58.573179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.921 [2024-10-01 16:54:58.573232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.921 [2024-10-01 16:54:58.573242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.921 [2024-10-01 16:54:58.573247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.921 [2024-10-01 16:54:58.573252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.921 [2024-10-01 16:54:58.573262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.921 qpair failed and we were unable to recover it.
00:30:06.921 [2024-10-01 16:54:58.583248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.921 [2024-10-01 16:54:58.583298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.921 [2024-10-01 16:54:58.583308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.921 [2024-10-01 16:54:58.583313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.921 [2024-10-01 16:54:58.583318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.921 [2024-10-01 16:54:58.583328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.921 qpair failed and we were unable to recover it.
00:30:06.921 [2024-10-01 16:54:58.593145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.921 [2024-10-01 16:54:58.593188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.921 [2024-10-01 16:54:58.593199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.921 [2024-10-01 16:54:58.593204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.921 [2024-10-01 16:54:58.593209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:06.921 [2024-10-01 16:54:58.593220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.921 qpair failed and we were unable to recover it.
00:30:07.181 [2024-10-01 16:54:58.603259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.181 [2024-10-01 16:54:58.603307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.181 [2024-10-01 16:54:58.603319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.181 [2024-10-01 16:54:58.603325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.181 [2024-10-01 16:54:58.603330] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.181 [2024-10-01 16:54:58.603340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.181 qpair failed and we were unable to recover it.
00:30:07.181 [2024-10-01 16:54:58.613278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.181 [2024-10-01 16:54:58.613323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.181 [2024-10-01 16:54:58.613334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.181 [2024-10-01 16:54:58.613339] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.181 [2024-10-01 16:54:58.613344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.181 [2024-10-01 16:54:58.613354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.181 qpair failed and we were unable to recover it.
00:30:07.181 [2024-10-01 16:54:58.623401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.181 [2024-10-01 16:54:58.623450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.181 [2024-10-01 16:54:58.623461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.181 [2024-10-01 16:54:58.623466] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.181 [2024-10-01 16:54:58.623471] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.181 [2024-10-01 16:54:58.623481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.182 qpair failed and we were unable to recover it.
00:30:07.182 [2024-10-01 16:54:58.633377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.182 [2024-10-01 16:54:58.633470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.182 [2024-10-01 16:54:58.633480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.182 [2024-10-01 16:54:58.633486] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.182 [2024-10-01 16:54:58.633490] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.182 [2024-10-01 16:54:58.633501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.182 qpair failed and we were unable to recover it.
00:30:07.182 [2024-10-01 16:54:58.643385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.182 [2024-10-01 16:54:58.643429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.182 [2024-10-01 16:54:58.643442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.182 [2024-10-01 16:54:58.643447] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.182 [2024-10-01 16:54:58.643452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.182 [2024-10-01 16:54:58.643463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.182 qpair failed and we were unable to recover it.
00:30:07.182 [2024-10-01 16:54:58.653436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.182 [2024-10-01 16:54:58.653520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.182 [2024-10-01 16:54:58.653530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.182 [2024-10-01 16:54:58.653536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.182 [2024-10-01 16:54:58.653540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.182 [2024-10-01 16:54:58.653551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.182 qpair failed and we were unable to recover it.
00:30:07.182 [2024-10-01 16:54:58.663487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.182 [2024-10-01 16:54:58.663585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.182 [2024-10-01 16:54:58.663595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.182 [2024-10-01 16:54:58.663600] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.182 [2024-10-01 16:54:58.663605] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.182 [2024-10-01 16:54:58.663616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.182 qpair failed and we were unable to recover it.
00:30:07.182 [2024-10-01 16:54:58.673469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.182 [2024-10-01 16:54:58.673511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.182 [2024-10-01 16:54:58.673521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.182 [2024-10-01 16:54:58.673526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.182 [2024-10-01 16:54:58.673531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.182 [2024-10-01 16:54:58.673541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.182 qpair failed and we were unable to recover it.
00:30:07.182 [2024-10-01 16:54:58.683469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.182 [2024-10-01 16:54:58.683510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.182 [2024-10-01 16:54:58.683520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.182 [2024-10-01 16:54:58.683526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.182 [2024-10-01 16:54:58.683531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.182 [2024-10-01 16:54:58.683544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.182 qpair failed and we were unable to recover it.
00:30:07.182 [2024-10-01 16:54:58.693561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.182 [2024-10-01 16:54:58.693601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.182 [2024-10-01 16:54:58.693611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.182 [2024-10-01 16:54:58.693616] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.182 [2024-10-01 16:54:58.693621] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.182 [2024-10-01 16:54:58.693631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.182 qpair failed and we were unable to recover it.
00:30:07.182 [2024-10-01 16:54:58.703569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.182 [2024-10-01 16:54:58.703616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.182 [2024-10-01 16:54:58.703627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.182 [2024-10-01 16:54:58.703632] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.182 [2024-10-01 16:54:58.703637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.182 [2024-10-01 16:54:58.703648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.182 qpair failed and we were unable to recover it.
00:30:07.182 [2024-10-01 16:54:58.713583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.182 [2024-10-01 16:54:58.713621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.182 [2024-10-01 16:54:58.713631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.182 [2024-10-01 16:54:58.713637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.182 [2024-10-01 16:54:58.713641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.182 [2024-10-01 16:54:58.713652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.182 qpair failed and we were unable to recover it.
00:30:07.182 [2024-10-01 16:54:58.723519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.182 [2024-10-01 16:54:58.723570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.182 [2024-10-01 16:54:58.723580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.182 [2024-10-01 16:54:58.723586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.182 [2024-10-01 16:54:58.723590] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.182 [2024-10-01 16:54:58.723601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.182 qpair failed and we were unable to recover it.
00:30:07.182 [2024-10-01 16:54:58.733617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.182 [2024-10-01 16:54:58.733660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.182 [2024-10-01 16:54:58.733672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.182 [2024-10-01 16:54:58.733678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.182 [2024-10-01 16:54:58.733683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.182 [2024-10-01 16:54:58.733694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.182 qpair failed and we were unable to recover it.
00:30:07.182 [2024-10-01 16:54:58.743683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.182 [2024-10-01 16:54:58.743738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.182 [2024-10-01 16:54:58.743758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.182 [2024-10-01 16:54:58.743764] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.182 [2024-10-01 16:54:58.743770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.182 [2024-10-01 16:54:58.743784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.182 qpair failed and we were unable to recover it.
00:30:07.182 [2024-10-01 16:54:58.753704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.182 [2024-10-01 16:54:58.753751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.182 [2024-10-01 16:54:58.753770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.182 [2024-10-01 16:54:58.753777] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.182 [2024-10-01 16:54:58.753782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.182 [2024-10-01 16:54:58.753797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.182 qpair failed and we were unable to recover it.
00:30:07.182 [2024-10-01 16:54:58.763604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.182 [2024-10-01 16:54:58.763657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.182 [2024-10-01 16:54:58.763669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.183 [2024-10-01 16:54:58.763675] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.183 [2024-10-01 16:54:58.763681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.183 [2024-10-01 16:54:58.763693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.183 qpair failed and we were unable to recover it.
00:30:07.183 [2024-10-01 16:54:58.773700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.183 [2024-10-01 16:54:58.773744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.183 [2024-10-01 16:54:58.773755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.183 [2024-10-01 16:54:58.773761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.183 [2024-10-01 16:54:58.773766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.183 [2024-10-01 16:54:58.773781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.183 qpair failed and we were unable to recover it.
00:30:07.183 [2024-10-01 16:54:58.783784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.183 [2024-10-01 16:54:58.783831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.183 [2024-10-01 16:54:58.783842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.183 [2024-10-01 16:54:58.783847] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.183 [2024-10-01 16:54:58.783852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.183 [2024-10-01 16:54:58.783863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.183 qpair failed and we were unable to recover it.
00:30:07.183 [2024-10-01 16:54:58.793829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.183 [2024-10-01 16:54:58.793868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.183 [2024-10-01 16:54:58.793878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.183 [2024-10-01 16:54:58.793884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.183 [2024-10-01 16:54:58.793888] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.183 [2024-10-01 16:54:58.793899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.183 qpair failed and we were unable to recover it.
00:30:07.183 [2024-10-01 16:54:58.803838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.183 [2024-10-01 16:54:58.803879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.183 [2024-10-01 16:54:58.803889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.183 [2024-10-01 16:54:58.803895] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.183 [2024-10-01 16:54:58.803900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.183 [2024-10-01 16:54:58.803910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.183 qpair failed and we were unable to recover it.
00:30:07.183 [2024-10-01 16:54:58.813836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.183 [2024-10-01 16:54:58.813926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.183 [2024-10-01 16:54:58.813937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.183 [2024-10-01 16:54:58.813942] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.183 [2024-10-01 16:54:58.813946] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.183 [2024-10-01 16:54:58.813957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.183 qpair failed and we were unable to recover it.
00:30:07.183 [2024-10-01 16:54:58.823831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.183 [2024-10-01 16:54:58.823880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.183 [2024-10-01 16:54:58.823891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.183 [2024-10-01 16:54:58.823896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.183 [2024-10-01 16:54:58.823901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.183 [2024-10-01 16:54:58.823911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.183 qpair failed and we were unable to recover it.
00:30:07.183 [2024-10-01 16:54:58.833920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.183 [2024-10-01 16:54:58.833962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.183 [2024-10-01 16:54:58.833976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.183 [2024-10-01 16:54:58.833981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.183 [2024-10-01 16:54:58.833986] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.183 [2024-10-01 16:54:58.833997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.183 qpair failed and we were unable to recover it.
00:30:07.183 [2024-10-01 16:54:58.843948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.183 [2024-10-01 16:54:58.843993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.183 [2024-10-01 16:54:58.844003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.183 [2024-10-01 16:54:58.844009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.183 [2024-10-01 16:54:58.844013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.183 [2024-10-01 16:54:58.844024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.183 qpair failed and we were unable to recover it.
00:30:07.183 [2024-10-01 16:54:58.853856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.183 [2024-10-01 16:54:58.853903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.183 [2024-10-01 16:54:58.853913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.183 [2024-10-01 16:54:58.853918] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.183 [2024-10-01 16:54:58.853923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.183 [2024-10-01 16:54:58.853934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.183 qpair failed and we were unable to recover it.
00:30:07.444 [2024-10-01 16:54:58.864022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.444 [2024-10-01 16:54:58.864068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.444 [2024-10-01 16:54:58.864079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.444 [2024-10-01 16:54:58.864084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.444 [2024-10-01 16:54:58.864092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.444 [2024-10-01 16:54:58.864103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.444 qpair failed and we were unable to recover it.
00:30:07.444 [2024-10-01 16:54:58.874039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.444 [2024-10-01 16:54:58.874080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.444 [2024-10-01 16:54:58.874090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.444 [2024-10-01 16:54:58.874096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.444 [2024-10-01 16:54:58.874101] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.444 [2024-10-01 16:54:58.874111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.444 qpair failed and we were unable to recover it.
00:30:07.444 [2024-10-01 16:54:58.884066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.444 [2024-10-01 16:54:58.884107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.444 [2024-10-01 16:54:58.884117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.444 [2024-10-01 16:54:58.884122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.444 [2024-10-01 16:54:58.884127] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.444 [2024-10-01 16:54:58.884138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.444 qpair failed and we were unable to recover it.
00:30:07.444 [2024-10-01 16:54:58.894029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.444 [2024-10-01 16:54:58.894072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.444 [2024-10-01 16:54:58.894082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.444 [2024-10-01 16:54:58.894088] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.444 [2024-10-01 16:54:58.894092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.444 [2024-10-01 16:54:58.894104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.444 qpair failed and we were unable to recover it.
00:30:07.444 [2024-10-01 16:54:58.904128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.444 [2024-10-01 16:54:58.904178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.444 [2024-10-01 16:54:58.904188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.444 [2024-10-01 16:54:58.904193] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.444 [2024-10-01 16:54:58.904198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.444 [2024-10-01 16:54:58.904208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.444 qpair failed and we were unable to recover it.
00:30:07.444 [2024-10-01 16:54:58.914146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.444 [2024-10-01 16:54:58.914193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.444 [2024-10-01 16:54:58.914203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.444 [2024-10-01 16:54:58.914209] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.444 [2024-10-01 16:54:58.914213] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.444 [2024-10-01 16:54:58.914224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.444 qpair failed and we were unable to recover it.
00:30:07.444 [2024-10-01 16:54:58.924154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.444 [2024-10-01 16:54:58.924201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.444 [2024-10-01 16:54:58.924212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.444 [2024-10-01 16:54:58.924217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.444 [2024-10-01 16:54:58.924222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.444 [2024-10-01 16:54:58.924232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.444 qpair failed and we were unable to recover it.
00:30:07.444 [2024-10-01 16:54:58.934154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.444 [2024-10-01 16:54:58.934197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.444 [2024-10-01 16:54:58.934213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.444 [2024-10-01 16:54:58.934219] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.444 [2024-10-01 16:54:58.934224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.444 [2024-10-01 16:54:58.934238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.444 qpair failed and we were unable to recover it.
00:30:07.444 [2024-10-01 16:54:58.944288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.444 [2024-10-01 16:54:58.944348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.444 [2024-10-01 16:54:58.944358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.444 [2024-10-01 16:54:58.944364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.444 [2024-10-01 16:54:58.944368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.444 [2024-10-01 16:54:58.944379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.445 qpair failed and we were unable to recover it.
00:30:07.445 [2024-10-01 16:54:58.954253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.445 [2024-10-01 16:54:58.954293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.445 [2024-10-01 16:54:58.954303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.445 [2024-10-01 16:54:58.954312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.445 [2024-10-01 16:54:58.954317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.445 [2024-10-01 16:54:58.954327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.445 qpair failed and we were unable to recover it.
00:30:07.445 [2024-10-01 16:54:58.964282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.445 [2024-10-01 16:54:58.964327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.445 [2024-10-01 16:54:58.964338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.445 [2024-10-01 16:54:58.964343] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.445 [2024-10-01 16:54:58.964348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.445 [2024-10-01 16:54:58.964358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.445 qpair failed and we were unable to recover it.
00:30:07.445 [2024-10-01 16:54:58.974262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.445 [2024-10-01 16:54:58.974303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.445 [2024-10-01 16:54:58.974314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.445 [2024-10-01 16:54:58.974319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.445 [2024-10-01 16:54:58.974324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.445 [2024-10-01 16:54:58.974334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.445 qpair failed and we were unable to recover it.
00:30:07.445 [2024-10-01 16:54:58.984353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.445 [2024-10-01 16:54:58.984397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.445 [2024-10-01 16:54:58.984407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.445 [2024-10-01 16:54:58.984412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.445 [2024-10-01 16:54:58.984416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.445 [2024-10-01 16:54:58.984427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.445 qpair failed and we were unable to recover it.
00:30:07.445 [2024-10-01 16:54:58.994343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.445 [2024-10-01 16:54:58.994384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.445 [2024-10-01 16:54:58.994394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.445 [2024-10-01 16:54:58.994400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.445 [2024-10-01 16:54:58.994404] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.445 [2024-10-01 16:54:58.994415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.445 qpair failed and we were unable to recover it.
00:30:07.445 [2024-10-01 16:54:59.004301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.445 [2024-10-01 16:54:59.004343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.445 [2024-10-01 16:54:59.004354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.445 [2024-10-01 16:54:59.004359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.445 [2024-10-01 16:54:59.004364] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.445 [2024-10-01 16:54:59.004374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.445 qpair failed and we were unable to recover it.
00:30:07.445 [2024-10-01 16:54:59.014389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.445 [2024-10-01 16:54:59.014430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.445 [2024-10-01 16:54:59.014440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.445 [2024-10-01 16:54:59.014446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.445 [2024-10-01 16:54:59.014451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.445 [2024-10-01 16:54:59.014461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.445 qpair failed and we were unable to recover it.
00:30:07.445 [2024-10-01 16:54:59.024459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.445 [2024-10-01 16:54:59.024507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.445 [2024-10-01 16:54:59.024517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.445 [2024-10-01 16:54:59.024522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.445 [2024-10-01 16:54:59.024527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.445 [2024-10-01 16:54:59.024538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.445 qpair failed and we were unable to recover it.
00:30:07.445 [2024-10-01 16:54:59.034467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.445 [2024-10-01 16:54:59.034510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.445 [2024-10-01 16:54:59.034520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.445 [2024-10-01 16:54:59.034526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.445 [2024-10-01 16:54:59.034530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.445 [2024-10-01 16:54:59.034541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.445 qpair failed and we were unable to recover it.
00:30:07.445 [2024-10-01 16:54:59.044488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.445 [2024-10-01 16:54:59.044532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.445 [2024-10-01 16:54:59.044542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.445 [2024-10-01 16:54:59.044550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.445 [2024-10-01 16:54:59.044555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.445 [2024-10-01 16:54:59.044566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.445 qpair failed and we were unable to recover it.
00:30:07.445 [2024-10-01 16:54:59.054499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.445 [2024-10-01 16:54:59.054542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.445 [2024-10-01 16:54:59.054552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.445 [2024-10-01 16:54:59.054557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.445 [2024-10-01 16:54:59.054562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.445 [2024-10-01 16:54:59.054573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.445 qpair failed and we were unable to recover it.
00:30:07.445 [2024-10-01 16:54:59.064543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.445 [2024-10-01 16:54:59.064589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.445 [2024-10-01 16:54:59.064599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.445 [2024-10-01 16:54:59.064604] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.445 [2024-10-01 16:54:59.064609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.445 [2024-10-01 16:54:59.064619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.445 qpair failed and we were unable to recover it.
00:30:07.445 [2024-10-01 16:54:59.074576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.445 [2024-10-01 16:54:59.074619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.445 [2024-10-01 16:54:59.074629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.445 [2024-10-01 16:54:59.074635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.445 [2024-10-01 16:54:59.074639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.445 [2024-10-01 16:54:59.074650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.445 qpair failed and we were unable to recover it.
00:30:07.445 [2024-10-01 16:54:59.084566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.445 [2024-10-01 16:54:59.084654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.445 [2024-10-01 16:54:59.084665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.446 [2024-10-01 16:54:59.084670] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.446 [2024-10-01 16:54:59.084675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.446 [2024-10-01 16:54:59.084686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.446 qpair failed and we were unable to recover it.
00:30:07.446 [2024-10-01 16:54:59.094582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.446 [2024-10-01 16:54:59.094630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.446 [2024-10-01 16:54:59.094641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.446 [2024-10-01 16:54:59.094646] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.446 [2024-10-01 16:54:59.094651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.446 [2024-10-01 16:54:59.094662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.446 qpair failed and we were unable to recover it.
00:30:07.446 [2024-10-01 16:54:59.104667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.446 [2024-10-01 16:54:59.104738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.446 [2024-10-01 16:54:59.104748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.446 [2024-10-01 16:54:59.104754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.446 [2024-10-01 16:54:59.104759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.446 [2024-10-01 16:54:59.104769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.446 qpair failed and we were unable to recover it.
00:30:07.446 [2024-10-01 16:54:59.114681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.446 [2024-10-01 16:54:59.114723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.446 [2024-10-01 16:54:59.114733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.446 [2024-10-01 16:54:59.114738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.446 [2024-10-01 16:54:59.114743] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.446 [2024-10-01 16:54:59.114754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.446 qpair failed and we were unable to recover it.
00:30:07.446 [2024-10-01 16:54:59.124603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.446 [2024-10-01 16:54:59.124645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.446 [2024-10-01 16:54:59.124657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.446 [2024-10-01 16:54:59.124662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.446 [2024-10-01 16:54:59.124667] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.446 [2024-10-01 16:54:59.124679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.446 qpair failed and we were unable to recover it.
00:30:07.707 [2024-10-01 16:54:59.134697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.707 [2024-10-01 16:54:59.134744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.707 [2024-10-01 16:54:59.134759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.707 [2024-10-01 16:54:59.134764] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.707 [2024-10-01 16:54:59.134770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.707 [2024-10-01 16:54:59.134781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.707 qpair failed and we were unable to recover it.
00:30:07.707 [2024-10-01 16:54:59.144786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.707 [2024-10-01 16:54:59.144837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.707 [2024-10-01 16:54:59.144849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.707 [2024-10-01 16:54:59.144854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.707 [2024-10-01 16:54:59.144860] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.707 [2024-10-01 16:54:59.144871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.707 qpair failed and we were unable to recover it.
00:30:07.707 [2024-10-01 16:54:59.154678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.707 [2024-10-01 16:54:59.154729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.707 [2024-10-01 16:54:59.154740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.707 [2024-10-01 16:54:59.154746] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.707 [2024-10-01 16:54:59.154751] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.707 [2024-10-01 16:54:59.154763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.707 qpair failed and we were unable to recover it.
00:30:07.707 [2024-10-01 16:54:59.164688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.707 [2024-10-01 16:54:59.164736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.707 [2024-10-01 16:54:59.164747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.707 [2024-10-01 16:54:59.164753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.707 [2024-10-01 16:54:59.164758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.707 [2024-10-01 16:54:59.164769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.707 qpair failed and we were unable to recover it.
00:30:07.707 [2024-10-01 16:54:59.174826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.707 [2024-10-01 16:54:59.174873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.707 [2024-10-01 16:54:59.174883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.707 [2024-10-01 16:54:59.174889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.707 [2024-10-01 16:54:59.174894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.707 [2024-10-01 16:54:59.174907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.707 qpair failed and we were unable to recover it.
00:30:07.707 [2024-10-01 16:54:59.184882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.707 [2024-10-01 16:54:59.184925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.707 [2024-10-01 16:54:59.184936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.707 [2024-10-01 16:54:59.184941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.707 [2024-10-01 16:54:59.184946] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.707 [2024-10-01 16:54:59.184957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.707 qpair failed and we were unable to recover it.
00:30:07.707 [2024-10-01 16:54:59.194903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.707 [2024-10-01 16:54:59.194950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.707 [2024-10-01 16:54:59.194961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.707 [2024-10-01 16:54:59.194966] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.708 [2024-10-01 16:54:59.194975] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.708 [2024-10-01 16:54:59.194986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.708 qpair failed and we were unable to recover it.
00:30:07.708 [2024-10-01 16:54:59.204938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.708 [2024-10-01 16:54:59.204994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.708 [2024-10-01 16:54:59.205005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.708 [2024-10-01 16:54:59.205010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.708 [2024-10-01 16:54:59.205015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:07.708 [2024-10-01 16:54:59.205026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.708 qpair failed and we were unable to recover it.
00:30:07.708 [2024-10-01 16:54:59.214948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.708 [2024-10-01 16:54:59.214995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.708 [2024-10-01 16:54:59.215006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.708 [2024-10-01 16:54:59.215011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.708 [2024-10-01 16:54:59.215016] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.708 [2024-10-01 16:54:59.215026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.708 qpair failed and we were unable to recover it. 00:30:07.708 [2024-10-01 16:54:59.225002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.708 [2024-10-01 16:54:59.225046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.708 [2024-10-01 16:54:59.225062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.708 [2024-10-01 16:54:59.225067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.708 [2024-10-01 16:54:59.225072] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.708 [2024-10-01 16:54:59.225083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.708 qpair failed and we were unable to recover it. 00:30:07.708 [2024-10-01 16:54:59.234997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.708 [2024-10-01 16:54:59.235057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.708 [2024-10-01 16:54:59.235067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.708 [2024-10-01 16:54:59.235072] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.708 [2024-10-01 16:54:59.235077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.708 [2024-10-01 16:54:59.235088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.708 qpair failed and we were unable to recover it. 
00:30:07.708 [2024-10-01 16:54:59.245013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.708 [2024-10-01 16:54:59.245061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.708 [2024-10-01 16:54:59.245071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.708 [2024-10-01 16:54:59.245077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.708 [2024-10-01 16:54:59.245082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.708 [2024-10-01 16:54:59.245093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.708 qpair failed and we were unable to recover it. 00:30:07.708 [2024-10-01 16:54:59.254921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.708 [2024-10-01 16:54:59.254962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.708 [2024-10-01 16:54:59.254976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.708 [2024-10-01 16:54:59.254982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.708 [2024-10-01 16:54:59.254987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.708 [2024-10-01 16:54:59.254998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.708 qpair failed and we were unable to recover it. 00:30:07.708 [2024-10-01 16:54:59.265194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.708 [2024-10-01 16:54:59.265243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.708 [2024-10-01 16:54:59.265254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.708 [2024-10-01 16:54:59.265259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.708 [2024-10-01 16:54:59.265264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.708 [2024-10-01 16:54:59.265277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.708 qpair failed and we were unable to recover it. 
00:30:07.708 [2024-10-01 16:54:59.275175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.708 [2024-10-01 16:54:59.275224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.708 [2024-10-01 16:54:59.275234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.708 [2024-10-01 16:54:59.275240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.708 [2024-10-01 16:54:59.275244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.708 [2024-10-01 16:54:59.275255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.708 qpair failed and we were unable to recover it. 00:30:07.708 [2024-10-01 16:54:59.285195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.708 [2024-10-01 16:54:59.285235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.708 [2024-10-01 16:54:59.285245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.708 [2024-10-01 16:54:59.285250] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.708 [2024-10-01 16:54:59.285255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.708 [2024-10-01 16:54:59.285265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.708 qpair failed and we were unable to recover it. 00:30:07.708 [2024-10-01 16:54:59.295239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.708 [2024-10-01 16:54:59.295282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.708 [2024-10-01 16:54:59.295292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.708 [2024-10-01 16:54:59.295298] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.708 [2024-10-01 16:54:59.295302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.708 [2024-10-01 16:54:59.295312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.708 qpair failed and we were unable to recover it. 
00:30:07.708 [2024-10-01 16:54:59.305207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.708 [2024-10-01 16:54:59.305253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.708 [2024-10-01 16:54:59.305263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.708 [2024-10-01 16:54:59.305269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.708 [2024-10-01 16:54:59.305273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.708 [2024-10-01 16:54:59.305284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.708 qpair failed and we were unable to recover it. 00:30:07.708 [2024-10-01 16:54:59.315237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.708 [2024-10-01 16:54:59.315278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.708 [2024-10-01 16:54:59.315291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.708 [2024-10-01 16:54:59.315297] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.708 [2024-10-01 16:54:59.315302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.708 [2024-10-01 16:54:59.315312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.708 qpair failed and we were unable to recover it. 00:30:07.708 [2024-10-01 16:54:59.325256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.708 [2024-10-01 16:54:59.325304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.708 [2024-10-01 16:54:59.325314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.708 [2024-10-01 16:54:59.325319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.708 [2024-10-01 16:54:59.325324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.708 [2024-10-01 16:54:59.325334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.708 qpair failed and we were unable to recover it. 
00:30:07.709 [2024-10-01 16:54:59.335257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.709 [2024-10-01 16:54:59.335310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.709 [2024-10-01 16:54:59.335320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.709 [2024-10-01 16:54:59.335325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.709 [2024-10-01 16:54:59.335330] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.709 [2024-10-01 16:54:59.335341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.709 qpair failed and we were unable to recover it. 00:30:07.709 [2024-10-01 16:54:59.345289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.709 [2024-10-01 16:54:59.345335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.709 [2024-10-01 16:54:59.345345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.709 [2024-10-01 16:54:59.345351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.709 [2024-10-01 16:54:59.345356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.709 [2024-10-01 16:54:59.345366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.709 qpair failed and we were unable to recover it. 00:30:07.709 [2024-10-01 16:54:59.355350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.709 [2024-10-01 16:54:59.355420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.709 [2024-10-01 16:54:59.355430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.709 [2024-10-01 16:54:59.355436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.709 [2024-10-01 16:54:59.355443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.709 [2024-10-01 16:54:59.355454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.709 qpair failed and we were unable to recover it. 
00:30:07.709 [2024-10-01 16:54:59.365236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.709 [2024-10-01 16:54:59.365281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.709 [2024-10-01 16:54:59.365291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.709 [2024-10-01 16:54:59.365297] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.709 [2024-10-01 16:54:59.365302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.709 [2024-10-01 16:54:59.365312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.709 qpair failed and we were unable to recover it. 00:30:07.709 [2024-10-01 16:54:59.375254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.709 [2024-10-01 16:54:59.375293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.709 [2024-10-01 16:54:59.375303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.709 [2024-10-01 16:54:59.375309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.709 [2024-10-01 16:54:59.375314] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.709 [2024-10-01 16:54:59.375324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.709 qpair failed and we were unable to recover it. 00:30:07.709 [2024-10-01 16:54:59.385396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.709 [2024-10-01 16:54:59.385445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.709 [2024-10-01 16:54:59.385455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.709 [2024-10-01 16:54:59.385460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.709 [2024-10-01 16:54:59.385465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.709 [2024-10-01 16:54:59.385476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.709 qpair failed and we were unable to recover it. 
00:30:07.970 [2024-10-01 16:54:59.395446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.970 [2024-10-01 16:54:59.395488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.970 [2024-10-01 16:54:59.395498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.970 [2024-10-01 16:54:59.395503] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.970 [2024-10-01 16:54:59.395508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.970 [2024-10-01 16:54:59.395519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.970 qpair failed and we were unable to recover it. 00:30:07.970 [2024-10-01 16:54:59.405465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.970 [2024-10-01 16:54:59.405521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.970 [2024-10-01 16:54:59.405531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.970 [2024-10-01 16:54:59.405537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.970 [2024-10-01 16:54:59.405542] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.970 [2024-10-01 16:54:59.405552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.970 qpair failed and we were unable to recover it. 00:30:07.970 [2024-10-01 16:54:59.415355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.970 [2024-10-01 16:54:59.415394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.970 [2024-10-01 16:54:59.415404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.970 [2024-10-01 16:54:59.415409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.970 [2024-10-01 16:54:59.415414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.970 [2024-10-01 16:54:59.415425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.970 qpair failed and we were unable to recover it. 
00:30:07.970 [2024-10-01 16:54:59.425557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.970 [2024-10-01 16:54:59.425611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.970 [2024-10-01 16:54:59.425622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.970 [2024-10-01 16:54:59.425627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.970 [2024-10-01 16:54:59.425633] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.970 [2024-10-01 16:54:59.425643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.970 qpair failed and we were unable to recover it. 00:30:07.970 [2024-10-01 16:54:59.435561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.970 [2024-10-01 16:54:59.435606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.970 [2024-10-01 16:54:59.435616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.970 [2024-10-01 16:54:59.435622] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.970 [2024-10-01 16:54:59.435627] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.970 [2024-10-01 16:54:59.435637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.970 qpair failed and we were unable to recover it. 00:30:07.970 [2024-10-01 16:54:59.445589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.970 [2024-10-01 16:54:59.445637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.970 [2024-10-01 16:54:59.445648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.970 [2024-10-01 16:54:59.445653] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.970 [2024-10-01 16:54:59.445660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.970 [2024-10-01 16:54:59.445671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.970 qpair failed and we were unable to recover it. 
00:30:07.970 [2024-10-01 16:54:59.455593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.970 [2024-10-01 16:54:59.455644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.970 [2024-10-01 16:54:59.455657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.970 [2024-10-01 16:54:59.455662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.970 [2024-10-01 16:54:59.455667] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.970 [2024-10-01 16:54:59.455679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.970 qpair failed and we were unable to recover it. 00:30:07.970 [2024-10-01 16:54:59.465653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.970 [2024-10-01 16:54:59.465703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.970 [2024-10-01 16:54:59.465714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.970 [2024-10-01 16:54:59.465720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.970 [2024-10-01 16:54:59.465725] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.970 [2024-10-01 16:54:59.465736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.970 qpair failed and we were unable to recover it. 00:30:07.970 [2024-10-01 16:54:59.475694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.970 [2024-10-01 16:54:59.475770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.970 [2024-10-01 16:54:59.475781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.970 [2024-10-01 16:54:59.475786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.970 [2024-10-01 16:54:59.475791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.970 [2024-10-01 16:54:59.475803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.970 qpair failed and we were unable to recover it. 
00:30:07.970 [2024-10-01 16:54:59.485689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.970 [2024-10-01 16:54:59.485735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.970 [2024-10-01 16:54:59.485754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.971 [2024-10-01 16:54:59.485761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.971 [2024-10-01 16:54:59.485767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.971 [2024-10-01 16:54:59.485781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.971 qpair failed and we were unable to recover it. 00:30:07.971 [2024-10-01 16:54:59.495697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.971 [2024-10-01 16:54:59.495744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.971 [2024-10-01 16:54:59.495762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.971 [2024-10-01 16:54:59.495769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.971 [2024-10-01 16:54:59.495774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.971 [2024-10-01 16:54:59.495789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.971 qpair failed and we were unable to recover it. 00:30:07.971 [2024-10-01 16:54:59.505762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.971 [2024-10-01 16:54:59.505812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.971 [2024-10-01 16:54:59.505824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.971 [2024-10-01 16:54:59.505830] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.971 [2024-10-01 16:54:59.505835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.971 [2024-10-01 16:54:59.505846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.971 qpair failed and we were unable to recover it. 
00:30:07.971 [2024-10-01 16:54:59.515778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.971 [2024-10-01 16:54:59.515820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.971 [2024-10-01 16:54:59.515831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.971 [2024-10-01 16:54:59.515836] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.971 [2024-10-01 16:54:59.515841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.971 [2024-10-01 16:54:59.515852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.971 qpair failed and we were unable to recover it. 00:30:07.971 [2024-10-01 16:54:59.525796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.971 [2024-10-01 16:54:59.525840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.971 [2024-10-01 16:54:59.525851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.971 [2024-10-01 16:54:59.525856] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.971 [2024-10-01 16:54:59.525861] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.971 [2024-10-01 16:54:59.525872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.971 qpair failed and we were unable to recover it. 00:30:07.971 [2024-10-01 16:54:59.535814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.971 [2024-10-01 16:54:59.535858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.971 [2024-10-01 16:54:59.535869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.971 [2024-10-01 16:54:59.535877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.971 [2024-10-01 16:54:59.535882] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.971 [2024-10-01 16:54:59.535893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.971 qpair failed and we were unable to recover it. 
00:30:07.971 [2024-10-01 16:54:59.545873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.971 [2024-10-01 16:54:59.545919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.971 [2024-10-01 16:54:59.545929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.971 [2024-10-01 16:54:59.545934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.971 [2024-10-01 16:54:59.545939] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.971 [2024-10-01 16:54:59.545950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.971 qpair failed and we were unable to recover it. 00:30:07.971 [2024-10-01 16:54:59.555883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.971 [2024-10-01 16:54:59.555926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.971 [2024-10-01 16:54:59.555936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.971 [2024-10-01 16:54:59.555942] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.971 [2024-10-01 16:54:59.555947] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.971 [2024-10-01 16:54:59.555958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.971 qpair failed and we were unable to recover it. 00:30:07.971 [2024-10-01 16:54:59.565916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.971 [2024-10-01 16:54:59.565973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.971 [2024-10-01 16:54:59.565983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.971 [2024-10-01 16:54:59.565989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.971 [2024-10-01 16:54:59.565994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.971 [2024-10-01 16:54:59.566006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.971 qpair failed and we were unable to recover it. 
00:30:07.971 [2024-10-01 16:54:59.575940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.971 [2024-10-01 16:54:59.575983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.971 [2024-10-01 16:54:59.575994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.971 [2024-10-01 16:54:59.576000] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.971 [2024-10-01 16:54:59.576004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.971 [2024-10-01 16:54:59.576015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.971 qpair failed and we were unable to recover it. 00:30:07.971 [2024-10-01 16:54:59.585997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.971 [2024-10-01 16:54:59.586048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.971 [2024-10-01 16:54:59.586059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.971 [2024-10-01 16:54:59.586065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.971 [2024-10-01 16:54:59.586069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.971 [2024-10-01 16:54:59.586080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.971 qpair failed and we were unable to recover it. 00:30:07.971 [2024-10-01 16:54:59.595872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.971 [2024-10-01 16:54:59.595912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.971 [2024-10-01 16:54:59.595923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.971 [2024-10-01 16:54:59.595929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.971 [2024-10-01 16:54:59.595933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.971 [2024-10-01 16:54:59.595944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.971 qpair failed and we were unable to recover it. 
00:30:07.971 [2024-10-01 16:54:59.606015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.971 [2024-10-01 16:54:59.606058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.971 [2024-10-01 16:54:59.606068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.971 [2024-10-01 16:54:59.606074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.971 [2024-10-01 16:54:59.606078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.971 [2024-10-01 16:54:59.606089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.971 qpair failed and we were unable to recover it. 00:30:07.971 [2024-10-01 16:54:59.616025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.971 [2024-10-01 16:54:59.616082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.971 [2024-10-01 16:54:59.616092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.971 [2024-10-01 16:54:59.616098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.971 [2024-10-01 16:54:59.616102] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.972 [2024-10-01 16:54:59.616113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.972 qpair failed and we were unable to recover it. 00:30:07.972 [2024-10-01 16:54:59.626112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.972 [2024-10-01 16:54:59.626160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.972 [2024-10-01 16:54:59.626173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.972 [2024-10-01 16:54:59.626178] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.972 [2024-10-01 16:54:59.626183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.972 [2024-10-01 16:54:59.626194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.972 qpair failed and we were unable to recover it. 
00:30:07.972 [2024-10-01 16:54:59.636115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.972 [2024-10-01 16:54:59.636196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.972 [2024-10-01 16:54:59.636208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.972 [2024-10-01 16:54:59.636213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.972 [2024-10-01 16:54:59.636218] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.972 [2024-10-01 16:54:59.636230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.972 qpair failed and we were unable to recover it. 00:30:07.972 [2024-10-01 16:54:59.646117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.972 [2024-10-01 16:54:59.646160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.972 [2024-10-01 16:54:59.646171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.972 [2024-10-01 16:54:59.646176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.972 [2024-10-01 16:54:59.646181] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:07.972 [2024-10-01 16:54:59.646192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.972 qpair failed and we were unable to recover it. 00:30:08.234 [2024-10-01 16:54:59.656183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.234 [2024-10-01 16:54:59.656229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.234 [2024-10-01 16:54:59.656239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.234 [2024-10-01 16:54:59.656244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.234 [2024-10-01 16:54:59.656249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.234 [2024-10-01 16:54:59.656260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.234 qpair failed and we were unable to recover it. 
00:30:08.234 [2024-10-01 16:54:59.666092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.234 [2024-10-01 16:54:59.666138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.234 [2024-10-01 16:54:59.666149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.234 [2024-10-01 16:54:59.666154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.234 [2024-10-01 16:54:59.666159] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.234 [2024-10-01 16:54:59.666170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.234 qpair failed and we were unable to recover it. 00:30:08.234 [2024-10-01 16:54:59.676193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.234 [2024-10-01 16:54:59.676238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.234 [2024-10-01 16:54:59.676248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.234 [2024-10-01 16:54:59.676254] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.234 [2024-10-01 16:54:59.676259] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.234 [2024-10-01 16:54:59.676269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.234 qpair failed and we were unable to recover it. 00:30:08.234 [2024-10-01 16:54:59.686262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.234 [2024-10-01 16:54:59.686336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.234 [2024-10-01 16:54:59.686346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.234 [2024-10-01 16:54:59.686352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.234 [2024-10-01 16:54:59.686357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.234 [2024-10-01 16:54:59.686368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.234 qpair failed and we were unable to recover it. 
00:30:08.234 [2024-10-01 16:54:59.696265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.234 [2024-10-01 16:54:59.696311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.234 [2024-10-01 16:54:59.696321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.234 [2024-10-01 16:54:59.696326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.234 [2024-10-01 16:54:59.696331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.234 [2024-10-01 16:54:59.696342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.234 qpair failed and we were unable to recover it. 00:30:08.234 [2024-10-01 16:54:59.706315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.234 [2024-10-01 16:54:59.706366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.234 [2024-10-01 16:54:59.706376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.234 [2024-10-01 16:54:59.706381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.234 [2024-10-01 16:54:59.706387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.234 [2024-10-01 16:54:59.706397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.234 qpair failed and we were unable to recover it. 00:30:08.234 [2024-10-01 16:54:59.716334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.234 [2024-10-01 16:54:59.716378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.234 [2024-10-01 16:54:59.716390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.234 [2024-10-01 16:54:59.716396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.234 [2024-10-01 16:54:59.716401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.234 [2024-10-01 16:54:59.716411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.234 qpair failed and we were unable to recover it. 
00:30:08.234 [2024-10-01 16:54:59.726352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.234 [2024-10-01 16:54:59.726398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.234 [2024-10-01 16:54:59.726409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.234 [2024-10-01 16:54:59.726414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.234 [2024-10-01 16:54:59.726419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.234 [2024-10-01 16:54:59.726430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.234 qpair failed and we were unable to recover it. 00:30:08.234 [2024-10-01 16:54:59.736363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.234 [2024-10-01 16:54:59.736406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.234 [2024-10-01 16:54:59.736416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.234 [2024-10-01 16:54:59.736422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.234 [2024-10-01 16:54:59.736427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.234 [2024-10-01 16:54:59.736438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.234 qpair failed and we were unable to recover it. 00:30:08.234 [2024-10-01 16:54:59.746427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.234 [2024-10-01 16:54:59.746473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.234 [2024-10-01 16:54:59.746483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.234 [2024-10-01 16:54:59.746489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.234 [2024-10-01 16:54:59.746494] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.234 [2024-10-01 16:54:59.746504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.234 qpair failed and we were unable to recover it. 
00:30:08.234 [2024-10-01 16:54:59.756421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.234 [2024-10-01 16:54:59.756464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.234 [2024-10-01 16:54:59.756474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.234 [2024-10-01 16:54:59.756480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.234 [2024-10-01 16:54:59.756484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.234 [2024-10-01 16:54:59.756498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.234 qpair failed and we were unable to recover it. 00:30:08.234 [2024-10-01 16:54:59.766494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.234 [2024-10-01 16:54:59.766532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.234 [2024-10-01 16:54:59.766542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.234 [2024-10-01 16:54:59.766548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.234 [2024-10-01 16:54:59.766552] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.234 [2024-10-01 16:54:59.766563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.234 qpair failed and we were unable to recover it. 00:30:08.234 [2024-10-01 16:54:59.776462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.234 [2024-10-01 16:54:59.776513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.234 [2024-10-01 16:54:59.776523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.234 [2024-10-01 16:54:59.776528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.235 [2024-10-01 16:54:59.776533] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.235 [2024-10-01 16:54:59.776544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.235 qpair failed and we were unable to recover it. 
00:30:08.235 [2024-10-01 16:54:59.786587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.235 [2024-10-01 16:54:59.786663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.235 [2024-10-01 16:54:59.786673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.235 [2024-10-01 16:54:59.786679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.235 [2024-10-01 16:54:59.786684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.235 [2024-10-01 16:54:59.786695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.235 qpair failed and we were unable to recover it. 00:30:08.235 [2024-10-01 16:54:59.796559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.235 [2024-10-01 16:54:59.796608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.235 [2024-10-01 16:54:59.796618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.235 [2024-10-01 16:54:59.796624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.235 [2024-10-01 16:54:59.796628] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.235 [2024-10-01 16:54:59.796639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.235 qpair failed and we were unable to recover it. 00:30:08.235 [2024-10-01 16:54:59.806595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.235 [2024-10-01 16:54:59.806643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.235 [2024-10-01 16:54:59.806657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.235 [2024-10-01 16:54:59.806663] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.235 [2024-10-01 16:54:59.806667] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.235 [2024-10-01 16:54:59.806679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.235 qpair failed and we were unable to recover it. 
00:30:08.235 [2024-10-01 16:54:59.816574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.235 [2024-10-01 16:54:59.816618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.235 [2024-10-01 16:54:59.816629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.235 [2024-10-01 16:54:59.816635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.235 [2024-10-01 16:54:59.816639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.235 [2024-10-01 16:54:59.816650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.235 qpair failed and we were unable to recover it. 00:30:08.235 [2024-10-01 16:54:59.826626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.235 [2024-10-01 16:54:59.826674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.235 [2024-10-01 16:54:59.826684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.235 [2024-10-01 16:54:59.826689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.235 [2024-10-01 16:54:59.826694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.235 [2024-10-01 16:54:59.826705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.235 qpair failed and we were unable to recover it. 00:30:08.235 [2024-10-01 16:54:59.836659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.235 [2024-10-01 16:54:59.836704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.235 [2024-10-01 16:54:59.836714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.235 [2024-10-01 16:54:59.836720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.235 [2024-10-01 16:54:59.836725] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.235 [2024-10-01 16:54:59.836736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.235 qpair failed and we were unable to recover it. 
00:30:08.235 [2024-10-01 16:54:59.846688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.235 [2024-10-01 16:54:59.846747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.235 [2024-10-01 16:54:59.846766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.235 [2024-10-01 16:54:59.846773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.235 [2024-10-01 16:54:59.846782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.235 [2024-10-01 16:54:59.846798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.235 qpair failed and we were unable to recover it. 00:30:08.235 [2024-10-01 16:54:59.856735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.235 [2024-10-01 16:54:59.856812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.235 [2024-10-01 16:54:59.856831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.235 [2024-10-01 16:54:59.856837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.235 [2024-10-01 16:54:59.856843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.235 [2024-10-01 16:54:59.856857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.235 qpair failed and we were unable to recover it. 00:30:08.235 [2024-10-01 16:54:59.866751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.235 [2024-10-01 16:54:59.866795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.235 [2024-10-01 16:54:59.866806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.235 [2024-10-01 16:54:59.866812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.235 [2024-10-01 16:54:59.866817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.235 [2024-10-01 16:54:59.866829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.235 qpair failed and we were unable to recover it. 
00:30:08.235 [2024-10-01 16:54:59.876808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.235 [2024-10-01 16:54:59.876892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.235 [2024-10-01 16:54:59.876903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.235 [2024-10-01 16:54:59.876909] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.235 [2024-10-01 16:54:59.876914] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.235 [2024-10-01 16:54:59.876925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.235 qpair failed and we were unable to recover it. 00:30:08.235 [2024-10-01 16:54:59.886784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.235 [2024-10-01 16:54:59.886837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.235 [2024-10-01 16:54:59.886847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.235 [2024-10-01 16:54:59.886853] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.235 [2024-10-01 16:54:59.886857] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.235 [2024-10-01 16:54:59.886868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.235 qpair failed and we were unable to recover it. 00:30:08.235 [2024-10-01 16:54:59.896845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.235 [2024-10-01 16:54:59.896927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.235 [2024-10-01 16:54:59.896938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.235 [2024-10-01 16:54:59.896944] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.235 [2024-10-01 16:54:59.896949] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.235 [2024-10-01 16:54:59.896960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.235 qpair failed and we were unable to recover it. 
00:30:08.235 [2024-10-01 16:54:59.906882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.235 [2024-10-01 16:54:59.906928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.235 [2024-10-01 16:54:59.906939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.235 [2024-10-01 16:54:59.906944] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.235 [2024-10-01 16:54:59.906949] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.235 [2024-10-01 16:54:59.906960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.235 qpair failed and we were unable to recover it. 00:30:08.497 [2024-10-01 16:54:59.916884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.497 [2024-10-01 16:54:59.916930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.497 [2024-10-01 16:54:59.916940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.497 [2024-10-01 16:54:59.916945] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.497 [2024-10-01 16:54:59.916950] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.497 [2024-10-01 16:54:59.916961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-10-01 16:54:59.926906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.497 [2024-10-01 16:54:59.927002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.497 [2024-10-01 16:54:59.927014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.497 [2024-10-01 16:54:59.927020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.497 [2024-10-01 16:54:59.927025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.497 [2024-10-01 16:54:59.927036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.497 qpair failed and we were unable to recover it. 
00:30:08.497 [2024-10-01 16:54:59.936893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.497 [2024-10-01 16:54:59.936933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.497 [2024-10-01 16:54:59.936944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.497 [2024-10-01 16:54:59.936949] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.497 [2024-10-01 16:54:59.936958] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.497 [2024-10-01 16:54:59.936973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-10-01 16:54:59.946844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.498 [2024-10-01 16:54:59.946915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.498 [2024-10-01 16:54:59.946927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.498 [2024-10-01 16:54:59.946933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.498 [2024-10-01 16:54:59.946937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.498 [2024-10-01 16:54:59.946949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-10-01 16:54:59.956990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.498 [2024-10-01 16:54:59.957037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.498 [2024-10-01 16:54:59.957048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.498 [2024-10-01 16:54:59.957053] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.498 [2024-10-01 16:54:59.957058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.498 [2024-10-01 16:54:59.957070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.498 qpair failed and we were unable to recover it. 
00:30:08.498 [2024-10-01 16:54:59.967026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.498 [2024-10-01 16:54:59.967070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.498 [2024-10-01 16:54:59.967080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.498 [2024-10-01 16:54:59.967086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.498 [2024-10-01 16:54:59.967091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.498 [2024-10-01 16:54:59.967101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-10-01 16:54:59.977025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.498 [2024-10-01 16:54:59.977072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.498 [2024-10-01 16:54:59.977082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.498 [2024-10-01 16:54:59.977087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.498 [2024-10-01 16:54:59.977092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.498 [2024-10-01 16:54:59.977103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-10-01 16:54:59.987081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.498 [2024-10-01 16:54:59.987137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.498 [2024-10-01 16:54:59.987147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.498 [2024-10-01 16:54:59.987153] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.498 [2024-10-01 16:54:59.987157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.498 [2024-10-01 16:54:59.987168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.498 qpair failed and we were unable to recover it. 
00:30:08.498 [2024-10-01 16:54:59.997120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.498 [2024-10-01 16:54:59.997172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.498 [2024-10-01 16:54:59.997182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.498 [2024-10-01 16:54:59.997188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.498 [2024-10-01 16:54:59.997193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.498 [2024-10-01 16:54:59.997203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-10-01 16:55:00.007129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.498 [2024-10-01 16:55:00.007172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.498 [2024-10-01 16:55:00.007184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.498 [2024-10-01 16:55:00.007190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.498 [2024-10-01 16:55:00.007195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.498 [2024-10-01 16:55:00.007206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-10-01 16:55:00.017135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.498 [2024-10-01 16:55:00.017185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.498 [2024-10-01 16:55:00.017196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.498 [2024-10-01 16:55:00.017201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.498 [2024-10-01 16:55:00.017207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.498 [2024-10-01 16:55:00.017217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.498 qpair failed and we were unable to recover it. 
00:30:08.498 [2024-10-01 16:55:00.027232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.498 [2024-10-01 16:55:00.027277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.498 [2024-10-01 16:55:00.027288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.498 [2024-10-01 16:55:00.027296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.498 [2024-10-01 16:55:00.027301] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.498 [2024-10-01 16:55:00.027312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-10-01 16:55:00.037217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.498 [2024-10-01 16:55:00.037334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.498 [2024-10-01 16:55:00.037345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.498 [2024-10-01 16:55:00.037351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.498 [2024-10-01 16:55:00.037356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.498 [2024-10-01 16:55:00.037366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-10-01 16:55:00.047211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.498 [2024-10-01 16:55:00.047253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.498 [2024-10-01 16:55:00.047263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.498 [2024-10-01 16:55:00.047269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.498 [2024-10-01 16:55:00.047275] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.498 [2024-10-01 16:55:00.047286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.498 qpair failed and we were unable to recover it. 
00:30:08.498 [2024-10-01 16:55:00.057276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.498 [2024-10-01 16:55:00.057325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.498 [2024-10-01 16:55:00.057335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.498 [2024-10-01 16:55:00.057341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.498 [2024-10-01 16:55:00.057346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.498 [2024-10-01 16:55:00.057357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-10-01 16:55:00.067306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.498 [2024-10-01 16:55:00.067356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.498 [2024-10-01 16:55:00.067366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.498 [2024-10-01 16:55:00.067372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.498 [2024-10-01 16:55:00.067377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.498 [2024-10-01 16:55:00.067387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-10-01 16:55:00.077280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.499 [2024-10-01 16:55:00.077324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.499 [2024-10-01 16:55:00.077335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.499 [2024-10-01 16:55:00.077341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.499 [2024-10-01 16:55:00.077346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.499 [2024-10-01 16:55:00.077357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.499 qpair failed and we were unable to recover it. 
00:30:08.499 [2024-10-01 16:55:00.087346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.499 [2024-10-01 16:55:00.087429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.499 [2024-10-01 16:55:00.087439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.499 [2024-10-01 16:55:00.087444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.499 [2024-10-01 16:55:00.087450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.499 [2024-10-01 16:55:00.087460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-10-01 16:55:00.097387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.499 [2024-10-01 16:55:00.097430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.499 [2024-10-01 16:55:00.097441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.499 [2024-10-01 16:55:00.097447] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.499 [2024-10-01 16:55:00.097452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.499 [2024-10-01 16:55:00.097463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-10-01 16:55:00.107480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.499 [2024-10-01 16:55:00.107546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.499 [2024-10-01 16:55:00.107556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.499 [2024-10-01 16:55:00.107562] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.499 [2024-10-01 16:55:00.107567] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.499 [2024-10-01 16:55:00.107578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.499 qpair failed and we were unable to recover it. 
00:30:08.499 [2024-10-01 16:55:00.117436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.499 [2024-10-01 16:55:00.117523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.499 [2024-10-01 16:55:00.117534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.499 [2024-10-01 16:55:00.117543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.499 [2024-10-01 16:55:00.117548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.499 [2024-10-01 16:55:00.117559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-10-01 16:55:00.127454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.499 [2024-10-01 16:55:00.127497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.499 [2024-10-01 16:55:00.127507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.499 [2024-10-01 16:55:00.127513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.499 [2024-10-01 16:55:00.127518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.499 [2024-10-01 16:55:00.127528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-10-01 16:55:00.137480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.499 [2024-10-01 16:55:00.137525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.499 [2024-10-01 16:55:00.137535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.499 [2024-10-01 16:55:00.137540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.499 [2024-10-01 16:55:00.137545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.499 [2024-10-01 16:55:00.137556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.499 qpair failed and we were unable to recover it. 
00:30:08.499 [2024-10-01 16:55:00.147550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.499 [2024-10-01 16:55:00.147599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.499 [2024-10-01 16:55:00.147609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.499 [2024-10-01 16:55:00.147614] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.499 [2024-10-01 16:55:00.147619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.499 [2024-10-01 16:55:00.147630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-10-01 16:55:00.157534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.499 [2024-10-01 16:55:00.157580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.499 [2024-10-01 16:55:00.157590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.499 [2024-10-01 16:55:00.157595] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.499 [2024-10-01 16:55:00.157600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.499 [2024-10-01 16:55:00.157611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-10-01 16:55:00.167502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.499 [2024-10-01 16:55:00.167546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.499 [2024-10-01 16:55:00.167557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.499 [2024-10-01 16:55:00.167563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.499 [2024-10-01 16:55:00.167568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.499 [2024-10-01 16:55:00.167578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.499 qpair failed and we were unable to recover it. 
00:30:08.499 [2024-10-01 16:55:00.177458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.499 [2024-10-01 16:55:00.177501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.499 [2024-10-01 16:55:00.177512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.499 [2024-10-01 16:55:00.177518] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.499 [2024-10-01 16:55:00.177524] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.499 [2024-10-01 16:55:00.177535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.760 [2024-10-01 16:55:00.187611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.760 [2024-10-01 16:55:00.187700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.760 [2024-10-01 16:55:00.187711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.760 [2024-10-01 16:55:00.187716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.760 [2024-10-01 16:55:00.187722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.760 [2024-10-01 16:55:00.187733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.760 qpair failed and we were unable to recover it. 00:30:08.760 [2024-10-01 16:55:00.197657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.760 [2024-10-01 16:55:00.197702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.760 [2024-10-01 16:55:00.197713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.760 [2024-10-01 16:55:00.197718] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.760 [2024-10-01 16:55:00.197723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.760 [2024-10-01 16:55:00.197734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.760 qpair failed and we were unable to recover it. 
00:30:08.760 [2024-10-01 16:55:00.207692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.760 [2024-10-01 16:55:00.207735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.760 [2024-10-01 16:55:00.207748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.760 [2024-10-01 16:55:00.207754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.760 [2024-10-01 16:55:00.207759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.760 [2024-10-01 16:55:00.207769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.760 qpair failed and we were unable to recover it. 00:30:08.760 [2024-10-01 16:55:00.217587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.760 [2024-10-01 16:55:00.217642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.760 [2024-10-01 16:55:00.217652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.760 [2024-10-01 16:55:00.217658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.760 [2024-10-01 16:55:00.217663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.760 [2024-10-01 16:55:00.217674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.760 qpair failed and we were unable to recover it. 00:30:08.760 [2024-10-01 16:55:00.227753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.760 [2024-10-01 16:55:00.227799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.760 [2024-10-01 16:55:00.227819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.760 [2024-10-01 16:55:00.227824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.760 [2024-10-01 16:55:00.227830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.760 [2024-10-01 16:55:00.227841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.760 qpair failed and we were unable to recover it. 
00:30:08.760 [2024-10-01 16:55:00.237769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.760 [2024-10-01 16:55:00.237815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.761 [2024-10-01 16:55:00.237826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.761 [2024-10-01 16:55:00.237831] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.761 [2024-10-01 16:55:00.237836] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.761 [2024-10-01 16:55:00.237855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.761 qpair failed and we were unable to recover it. 00:30:08.761 [2024-10-01 16:55:00.247800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.761 [2024-10-01 16:55:00.247888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.761 [2024-10-01 16:55:00.247899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.761 [2024-10-01 16:55:00.247904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.761 [2024-10-01 16:55:00.247909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.761 [2024-10-01 16:55:00.247923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.761 qpair failed and we were unable to recover it. 00:30:08.761 [2024-10-01 16:55:00.257807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.761 [2024-10-01 16:55:00.257860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.761 [2024-10-01 16:55:00.257871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.761 [2024-10-01 16:55:00.257876] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.761 [2024-10-01 16:55:00.257881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.761 [2024-10-01 16:55:00.257891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.761 qpair failed and we were unable to recover it. 
00:30:08.761 [2024-10-01 16:55:00.267767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.761 [2024-10-01 16:55:00.267813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.761 [2024-10-01 16:55:00.267824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.761 [2024-10-01 16:55:00.267829] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.761 [2024-10-01 16:55:00.267834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.761 [2024-10-01 16:55:00.267845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.761 qpair failed and we were unable to recover it. 00:30:08.761 [2024-10-01 16:55:00.277857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.761 [2024-10-01 16:55:00.277905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.761 [2024-10-01 16:55:00.277915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.761 [2024-10-01 16:55:00.277921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.761 [2024-10-01 16:55:00.277925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.761 [2024-10-01 16:55:00.277936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.761 qpair failed and we were unable to recover it. 00:30:08.761 [2024-10-01 16:55:00.287893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.761 [2024-10-01 16:55:00.287937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.761 [2024-10-01 16:55:00.287948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.761 [2024-10-01 16:55:00.287953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.761 [2024-10-01 16:55:00.287958] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.761 [2024-10-01 16:55:00.287972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.761 qpair failed and we were unable to recover it. 
00:30:08.761 [2024-10-01 16:55:00.297920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.761 [2024-10-01 16:55:00.297962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.761 [2024-10-01 16:55:00.297978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.761 [2024-10-01 16:55:00.297983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.761 [2024-10-01 16:55:00.297989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.761 [2024-10-01 16:55:00.298000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.761 qpair failed and we were unable to recover it. 00:30:08.761 [2024-10-01 16:55:00.307986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.761 [2024-10-01 16:55:00.308041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.761 [2024-10-01 16:55:00.308051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.761 [2024-10-01 16:55:00.308056] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.761 [2024-10-01 16:55:00.308061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.761 [2024-10-01 16:55:00.308072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.761 qpair failed and we were unable to recover it. 00:30:08.761 [2024-10-01 16:55:00.318000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.761 [2024-10-01 16:55:00.318043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.761 [2024-10-01 16:55:00.318053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.761 [2024-10-01 16:55:00.318058] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.761 [2024-10-01 16:55:00.318063] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.761 [2024-10-01 16:55:00.318073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.761 qpair failed and we were unable to recover it. 
00:30:08.761 [2024-10-01 16:55:00.328006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.761 [2024-10-01 16:55:00.328058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.761 [2024-10-01 16:55:00.328068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.761 [2024-10-01 16:55:00.328074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.761 [2024-10-01 16:55:00.328079] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.761 [2024-10-01 16:55:00.328089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.761 qpair failed and we were unable to recover it. 00:30:08.761 [2024-10-01 16:55:00.337919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.761 [2024-10-01 16:55:00.337974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.761 [2024-10-01 16:55:00.337984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.761 [2024-10-01 16:55:00.337990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.761 [2024-10-01 16:55:00.337995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.761 [2024-10-01 16:55:00.338009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.761 qpair failed and we were unable to recover it. 00:30:08.761 [2024-10-01 16:55:00.348061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.761 [2024-10-01 16:55:00.348112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.761 [2024-10-01 16:55:00.348122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.761 [2024-10-01 16:55:00.348128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.761 [2024-10-01 16:55:00.348133] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.761 [2024-10-01 16:55:00.348143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.761 qpair failed and we were unable to recover it. 
00:30:08.761 [2024-10-01 16:55:00.358004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.761 [2024-10-01 16:55:00.358052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.761 [2024-10-01 16:55:00.358062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.761 [2024-10-01 16:55:00.358067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.761 [2024-10-01 16:55:00.358072] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.761 [2024-10-01 16:55:00.358083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.761 qpair failed and we were unable to recover it. 00:30:08.761 [2024-10-01 16:55:00.368012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.761 [2024-10-01 16:55:00.368052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.761 [2024-10-01 16:55:00.368062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.761 [2024-10-01 16:55:00.368068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.761 [2024-10-01 16:55:00.368073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.761 [2024-10-01 16:55:00.368084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.762 qpair failed and we were unable to recover it. 00:30:08.762 [2024-10-01 16:55:00.378016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.762 [2024-10-01 16:55:00.378059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.762 [2024-10-01 16:55:00.378070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.762 [2024-10-01 16:55:00.378075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.762 [2024-10-01 16:55:00.378080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.762 [2024-10-01 16:55:00.378091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.762 qpair failed and we were unable to recover it. 
00:30:08.762 [2024-10-01 16:55:00.388104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.762 [2024-10-01 16:55:00.388197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.762 [2024-10-01 16:55:00.388207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.762 [2024-10-01 16:55:00.388213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.762 [2024-10-01 16:55:00.388218] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.762 [2024-10-01 16:55:00.388229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.762 qpair failed and we were unable to recover it. 00:30:08.762 [2024-10-01 16:55:00.398248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.762 [2024-10-01 16:55:00.398294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.762 [2024-10-01 16:55:00.398304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.762 [2024-10-01 16:55:00.398309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.762 [2024-10-01 16:55:00.398314] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.762 [2024-10-01 16:55:00.398324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.762 qpair failed and we were unable to recover it. 00:30:08.762 [2024-10-01 16:55:00.408222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.762 [2024-10-01 16:55:00.408265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.762 [2024-10-01 16:55:00.408275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.762 [2024-10-01 16:55:00.408280] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.762 [2024-10-01 16:55:00.408285] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.762 [2024-10-01 16:55:00.408295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.762 qpair failed and we were unable to recover it. 
00:30:08.762 [2024-10-01 16:55:00.418243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.762 [2024-10-01 16:55:00.418285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.762 [2024-10-01 16:55:00.418295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.762 [2024-10-01 16:55:00.418300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.762 [2024-10-01 16:55:00.418305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.762 [2024-10-01 16:55:00.418315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.762 qpair failed and we were unable to recover it. 00:30:08.762 [2024-10-01 16:55:00.428332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.762 [2024-10-01 16:55:00.428403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.762 [2024-10-01 16:55:00.428413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.762 [2024-10-01 16:55:00.428418] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.762 [2024-10-01 16:55:00.428426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.762 [2024-10-01 16:55:00.428436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.762 qpair failed and we were unable to recover it. 00:30:08.762 [2024-10-01 16:55:00.438309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.762 [2024-10-01 16:55:00.438352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.762 [2024-10-01 16:55:00.438362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.762 [2024-10-01 16:55:00.438368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.762 [2024-10-01 16:55:00.438372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:08.762 [2024-10-01 16:55:00.438383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.762 qpair failed and we were unable to recover it. 
00:30:09.025 [2024-10-01 16:55:00.448365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.025 [2024-10-01 16:55:00.448406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.025 [2024-10-01 16:55:00.448417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.025 [2024-10-01 16:55:00.448422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.025 [2024-10-01 16:55:00.448427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.025 [2024-10-01 16:55:00.448438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.025 qpair failed and we were unable to recover it. 00:30:09.025 [2024-10-01 16:55:00.458354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.025 [2024-10-01 16:55:00.458401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.025 [2024-10-01 16:55:00.458411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.025 [2024-10-01 16:55:00.458416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.025 [2024-10-01 16:55:00.458421] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.025 [2024-10-01 16:55:00.458432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.025 qpair failed and we were unable to recover it. 00:30:09.025 [2024-10-01 16:55:00.468408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.025 [2024-10-01 16:55:00.468458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.025 [2024-10-01 16:55:00.468468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.025 [2024-10-01 16:55:00.468474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.025 [2024-10-01 16:55:00.468479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.025 [2024-10-01 16:55:00.468489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.025 qpair failed and we were unable to recover it. 
00:30:09.025 [2024-10-01 16:55:00.478424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.025 [2024-10-01 16:55:00.478471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.025 [2024-10-01 16:55:00.478482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.025 [2024-10-01 16:55:00.478487] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.025 [2024-10-01 16:55:00.478493] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.025 [2024-10-01 16:55:00.478503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.025 qpair failed and we were unable to recover it. 00:30:09.025 [2024-10-01 16:55:00.488472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.025 [2024-10-01 16:55:00.488516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.025 [2024-10-01 16:55:00.488526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.025 [2024-10-01 16:55:00.488531] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.025 [2024-10-01 16:55:00.488536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.025 [2024-10-01 16:55:00.488546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.025 qpair failed and we were unable to recover it. 00:30:09.025 [2024-10-01 16:55:00.498476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.025 [2024-10-01 16:55:00.498526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.025 [2024-10-01 16:55:00.498536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.025 [2024-10-01 16:55:00.498541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.025 [2024-10-01 16:55:00.498546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.025 [2024-10-01 16:55:00.498557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.025 qpair failed and we were unable to recover it. 
00:30:09.025 [2024-10-01 16:55:00.508408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.025 [2024-10-01 16:55:00.508457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.025 [2024-10-01 16:55:00.508467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.025 [2024-10-01 16:55:00.508473] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.025 [2024-10-01 16:55:00.508478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.025 [2024-10-01 16:55:00.508488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.025 qpair failed and we were unable to recover it. 00:30:09.025 [2024-10-01 16:55:00.518547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.025 [2024-10-01 16:55:00.518589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.025 [2024-10-01 16:55:00.518598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.025 [2024-10-01 16:55:00.518606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.025 [2024-10-01 16:55:00.518611] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.025 [2024-10-01 16:55:00.518622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.025 qpair failed and we were unable to recover it. 00:30:09.025 [2024-10-01 16:55:00.528584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.025 [2024-10-01 16:55:00.528629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.025 [2024-10-01 16:55:00.528639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.025 [2024-10-01 16:55:00.528645] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.025 [2024-10-01 16:55:00.528649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.025 [2024-10-01 16:55:00.528660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.025 qpair failed and we were unable to recover it. 
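The status pair printed in every cycle decodes against the NVMe over Fabrics status tables: sct 1 is the command-specific status type, and for a Fabrics CONNECT command sc 130 (0x82) is CONNECT Invalid Parameters, which squares with the target's complaint that controller ID 0x1 is unknown: the host keeps asking to attach an I/O queue to a controller the target no longer tracks. The rc -5 and "CQ transport error -6" values are negated errnos (EIO and ENXIO; the log itself prints the latter's "No such device or address"). A small lookup for eyeballing these codes — the values follow the NVMe-oF CONNECT status assignments as I read them, so verify against the spec revision you target:

    # Command-specific status codes for a Fabrics CONNECT command
    # (values per the NVMe over Fabrics spec; double-check against the
    # revision you build against before baking this into tooling).
    CONNECT_STATUS = {
        0x80: "CONNECT Incompatible Format",
        0x81: "CONNECT Controller Busy",
        0x82: "CONNECT Invalid Parameters",
        0x83: "CONNECT Restart Discovery",
        0x84: "CONNECT Invalid Host",
    }

    def decode_connect_status(sct, sc):
        if sct == 0x1:  # command-specific status type
            return CONNECT_STATUS.get(sc, f"command-specific sc {sc:#x}")
        return f"sct {sct:#x}, sc {sc:#x}"

    print(decode_connect_status(1, 130))  # -> CONNECT Invalid Parameters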
00:30:09.025 [2024-10-01 16:55:00.538593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.025 [2024-10-01 16:55:00.538647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.025 [2024-10-01 16:55:00.538657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.025 [2024-10-01 16:55:00.538663] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.025 [2024-10-01 16:55:00.538668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.025 [2024-10-01 16:55:00.538678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.025 qpair failed and we were unable to recover it. 00:30:09.025 [2024-10-01 16:55:00.548643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.026 [2024-10-01 16:55:00.548692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.026 [2024-10-01 16:55:00.548702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.026 [2024-10-01 16:55:00.548708] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.026 [2024-10-01 16:55:00.548712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.026 [2024-10-01 16:55:00.548723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.026 qpair failed and we were unable to recover it. 00:30:09.026 [2024-10-01 16:55:00.558649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.026 [2024-10-01 16:55:00.558702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.026 [2024-10-01 16:55:00.558712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.026 [2024-10-01 16:55:00.558718] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.026 [2024-10-01 16:55:00.558723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.026 [2024-10-01 16:55:00.558733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.026 qpair failed and we were unable to recover it. 
00:30:09.026 [2024-10-01 16:55:00.568648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.026 [2024-10-01 16:55:00.568690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.026 [2024-10-01 16:55:00.568700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.026 [2024-10-01 16:55:00.568706] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.026 [2024-10-01 16:55:00.568711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.026 [2024-10-01 16:55:00.568721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.026 qpair failed and we were unable to recover it. 00:30:09.026 [2024-10-01 16:55:00.578574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.026 [2024-10-01 16:55:00.578618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.026 [2024-10-01 16:55:00.578628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.026 [2024-10-01 16:55:00.578634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.026 [2024-10-01 16:55:00.578638] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.026 [2024-10-01 16:55:00.578649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.026 qpair failed and we were unable to recover it. 00:30:09.026 [2024-10-01 16:55:00.588748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.026 [2024-10-01 16:55:00.588798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.026 [2024-10-01 16:55:00.588808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.026 [2024-10-01 16:55:00.588814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.026 [2024-10-01 16:55:00.588818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.026 [2024-10-01 16:55:00.588829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.026 qpair failed and we were unable to recover it. 
00:30:09.026 [2024-10-01 16:55:00.598759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.026 [2024-10-01 16:55:00.598814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.026 [2024-10-01 16:55:00.598824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.026 [2024-10-01 16:55:00.598830] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.026 [2024-10-01 16:55:00.598834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.026 [2024-10-01 16:55:00.598845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.026 qpair failed and we were unable to recover it. 00:30:09.026 [2024-10-01 16:55:00.608678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.026 [2024-10-01 16:55:00.608725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.026 [2024-10-01 16:55:00.608735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.026 [2024-10-01 16:55:00.608743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.026 [2024-10-01 16:55:00.608748] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.026 [2024-10-01 16:55:00.608759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.026 qpair failed and we were unable to recover it. 00:30:09.026 [2024-10-01 16:55:00.618816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.026 [2024-10-01 16:55:00.618859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.026 [2024-10-01 16:55:00.618869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.026 [2024-10-01 16:55:00.618875] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.026 [2024-10-01 16:55:00.618879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.026 [2024-10-01 16:55:00.618890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.026 qpair failed and we were unable to recover it. 
00:30:09.026 [2024-10-01 16:55:00.628865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.026 [2024-10-01 16:55:00.628909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.026 [2024-10-01 16:55:00.628920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.026 [2024-10-01 16:55:00.628925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.026 [2024-10-01 16:55:00.628930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.026 [2024-10-01 16:55:00.628940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.026 qpair failed and we were unable to recover it. 00:30:09.026 [2024-10-01 16:55:00.638854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.026 [2024-10-01 16:55:00.638895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.026 [2024-10-01 16:55:00.638905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.026 [2024-10-01 16:55:00.638910] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.026 [2024-10-01 16:55:00.638915] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.026 [2024-10-01 16:55:00.638926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.026 qpair failed and we were unable to recover it. 00:30:09.026 [2024-10-01 16:55:00.648933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.026 [2024-10-01 16:55:00.648977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.026 [2024-10-01 16:55:00.648987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.026 [2024-10-01 16:55:00.648993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.026 [2024-10-01 16:55:00.648998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.026 [2024-10-01 16:55:00.649008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.026 qpair failed and we were unable to recover it. 
00:30:09.026 [2024-10-01 16:55:00.658908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.026 [2024-10-01 16:55:00.658959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.026 [2024-10-01 16:55:00.658972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.026 [2024-10-01 16:55:00.658978] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.026 [2024-10-01 16:55:00.658983] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.026 [2024-10-01 16:55:00.658993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.026 qpair failed and we were unable to recover it. 00:30:09.026 [2024-10-01 16:55:00.668979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.026 [2024-10-01 16:55:00.669021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.026 [2024-10-01 16:55:00.669032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.026 [2024-10-01 16:55:00.669037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.026 [2024-10-01 16:55:00.669042] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.026 [2024-10-01 16:55:00.669053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.026 qpair failed and we were unable to recover it. 00:30:09.026 [2024-10-01 16:55:00.678977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.026 [2024-10-01 16:55:00.679018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.026 [2024-10-01 16:55:00.679029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.026 [2024-10-01 16:55:00.679034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.026 [2024-10-01 16:55:00.679039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.026 [2024-10-01 16:55:00.679050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.026 qpair failed and we were unable to recover it. 
00:30:09.026 [2024-10-01 16:55:00.689018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.026 [2024-10-01 16:55:00.689066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.026 [2024-10-01 16:55:00.689076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.026 [2024-10-01 16:55:00.689082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.026 [2024-10-01 16:55:00.689087] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.026 [2024-10-01 16:55:00.689098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.026 qpair failed and we were unable to recover it. 00:30:09.026 [2024-10-01 16:55:00.699028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.026 [2024-10-01 16:55:00.699072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.026 [2024-10-01 16:55:00.699085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.026 [2024-10-01 16:55:00.699091] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.026 [2024-10-01 16:55:00.699096] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.026 [2024-10-01 16:55:00.699106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.026 qpair failed and we were unable to recover it. 00:30:09.293 [2024-10-01 16:55:00.709108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.293 [2024-10-01 16:55:00.709165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.293 [2024-10-01 16:55:00.709175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.293 [2024-10-01 16:55:00.709180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.293 [2024-10-01 16:55:00.709185] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.293 [2024-10-01 16:55:00.709197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.293 qpair failed and we were unable to recover it. 
00:30:09.293 [2024-10-01 16:55:00.719107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.293 [2024-10-01 16:55:00.719155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.293 [2024-10-01 16:55:00.719165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.293 [2024-10-01 16:55:00.719171] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.293 [2024-10-01 16:55:00.719176] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.293 [2024-10-01 16:55:00.719186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.293 qpair failed and we were unable to recover it. 00:30:09.293 [2024-10-01 16:55:00.729110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.293 [2024-10-01 16:55:00.729156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.293 [2024-10-01 16:55:00.729166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.293 [2024-10-01 16:55:00.729172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.293 [2024-10-01 16:55:00.729177] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.293 [2024-10-01 16:55:00.729187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.293 qpair failed and we were unable to recover it. 00:30:09.293 [2024-10-01 16:55:00.739130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.293 [2024-10-01 16:55:00.739176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.293 [2024-10-01 16:55:00.739186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.293 [2024-10-01 16:55:00.739192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.293 [2024-10-01 16:55:00.739197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.293 [2024-10-01 16:55:00.739214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.293 qpair failed and we were unable to recover it. 
00:30:09.293 [2024-10-01 16:55:00.749201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.293 [2024-10-01 16:55:00.749244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.293 [2024-10-01 16:55:00.749255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.293 [2024-10-01 16:55:00.749260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.293 [2024-10-01 16:55:00.749265] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.293 [2024-10-01 16:55:00.749276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.294 qpair failed and we were unable to recover it. 00:30:09.294 [2024-10-01 16:55:00.759210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.294 [2024-10-01 16:55:00.759252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.294 [2024-10-01 16:55:00.759263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.294 [2024-10-01 16:55:00.759268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.294 [2024-10-01 16:55:00.759273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.294 [2024-10-01 16:55:00.759283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.294 qpair failed and we were unable to recover it. 00:30:09.294 [2024-10-01 16:55:00.769238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.294 [2024-10-01 16:55:00.769288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.294 [2024-10-01 16:55:00.769298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.294 [2024-10-01 16:55:00.769303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.294 [2024-10-01 16:55:00.769308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.294 [2024-10-01 16:55:00.769318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.294 qpair failed and we were unable to recover it. 
00:30:09.294 [2024-10-01 16:55:00.779239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.294 [2024-10-01 16:55:00.779290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.294 [2024-10-01 16:55:00.779300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.294 [2024-10-01 16:55:00.779305] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.294 [2024-10-01 16:55:00.779310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.294 [2024-10-01 16:55:00.779320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.294 qpair failed and we were unable to recover it. 00:30:09.294 [2024-10-01 16:55:00.789310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.294 [2024-10-01 16:55:00.789357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.294 [2024-10-01 16:55:00.789370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.294 [2024-10-01 16:55:00.789376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.294 [2024-10-01 16:55:00.789381] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.294 [2024-10-01 16:55:00.789391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.294 qpair failed and we were unable to recover it. 00:30:09.294 [2024-10-01 16:55:00.799198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.294 [2024-10-01 16:55:00.799243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.294 [2024-10-01 16:55:00.799253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.294 [2024-10-01 16:55:00.799259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.294 [2024-10-01 16:55:00.799264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.294 [2024-10-01 16:55:00.799274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.294 qpair failed and we were unable to recover it. 
00:30:09.294 [2024-10-01 16:55:00.809217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.294 [2024-10-01 16:55:00.809264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.294 [2024-10-01 16:55:00.809274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.294 [2024-10-01 16:55:00.809280] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.294 [2024-10-01 16:55:00.809285] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.294 [2024-10-01 16:55:00.809295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.294 qpair failed and we were unable to recover it. 00:30:09.294 [2024-10-01 16:55:00.819344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.294 [2024-10-01 16:55:00.819387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.294 [2024-10-01 16:55:00.819397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.294 [2024-10-01 16:55:00.819402] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.294 [2024-10-01 16:55:00.819407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.294 [2024-10-01 16:55:00.819418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.294 qpair failed and we were unable to recover it. 00:30:09.294 [2024-10-01 16:55:00.829407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.294 [2024-10-01 16:55:00.829451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.294 [2024-10-01 16:55:00.829462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.294 [2024-10-01 16:55:00.829468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.294 [2024-10-01 16:55:00.829473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.294 [2024-10-01 16:55:00.829486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.294 qpair failed and we were unable to recover it. 
00:30:09.294 [2024-10-01 16:55:00.839420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.294 [2024-10-01 16:55:00.839463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.294 [2024-10-01 16:55:00.839473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.294 [2024-10-01 16:55:00.839479] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.294 [2024-10-01 16:55:00.839484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.294 [2024-10-01 16:55:00.839494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.294 qpair failed and we were unable to recover it. 00:30:09.294 [2024-10-01 16:55:00.849475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.294 [2024-10-01 16:55:00.849518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.294 [2024-10-01 16:55:00.849528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.294 [2024-10-01 16:55:00.849534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.294 [2024-10-01 16:55:00.849539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.294 [2024-10-01 16:55:00.849549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.294 qpair failed and we were unable to recover it. 00:30:09.294 [2024-10-01 16:55:00.859438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.294 [2024-10-01 16:55:00.859484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.294 [2024-10-01 16:55:00.859494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.294 [2024-10-01 16:55:00.859499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.294 [2024-10-01 16:55:00.859504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.294 [2024-10-01 16:55:00.859514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.294 qpair failed and we were unable to recover it. 
00:30:09.294 [2024-10-01 16:55:00.869493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.294 [2024-10-01 16:55:00.869559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.294 [2024-10-01 16:55:00.869569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.294 [2024-10-01 16:55:00.869575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.294 [2024-10-01 16:55:00.869580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90
00:30:09.294 [2024-10-01 16:55:00.869590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.294 qpair failed and we were unable to recover it.
[the identical seven-line CONNECT failure sequence repeats 68 more times, roughly every 10 ms, for attempts timestamped 2024-10-01 16:55:00.879 through 16:55:01.551; every attempt fails with sct 1, sc 130 on tqpair=0x7fde70000b90 and ends with "qpair failed and we were unable to recover it."]
00:30:09.939 [2024-10-01 16:55:01.561268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.939 [2024-10-01 16:55:01.561312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.939 [2024-10-01 16:55:01.561322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.939 [2024-10-01 16:55:01.561328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.939 [2024-10-01 16:55:01.561333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.939 [2024-10-01 16:55:01.561344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-10-01 16:55:01.571417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.939 [2024-10-01 16:55:01.571475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.939 [2024-10-01 16:55:01.571485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.939 [2024-10-01 16:55:01.571491] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.939 [2024-10-01 16:55:01.571496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.939 [2024-10-01 16:55:01.571506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-10-01 16:55:01.581439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.939 [2024-10-01 16:55:01.581523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.939 [2024-10-01 16:55:01.581533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.939 [2024-10-01 16:55:01.581538] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.939 [2024-10-01 16:55:01.581544] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.939 [2024-10-01 16:55:01.581554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.939 qpair failed and we were unable to recover it. 
00:30:09.939 [2024-10-01 16:55:01.591363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.939 [2024-10-01 16:55:01.591407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.939 [2024-10-01 16:55:01.591417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.939 [2024-10-01 16:55:01.591425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.939 [2024-10-01 16:55:01.591430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.939 [2024-10-01 16:55:01.591441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.939 qpair failed and we were unable to recover it. 00:30:09.939 [2024-10-01 16:55:01.601496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.939 [2024-10-01 16:55:01.601538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.939 [2024-10-01 16:55:01.601548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.940 [2024-10-01 16:55:01.601553] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.940 [2024-10-01 16:55:01.601558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.940 [2024-10-01 16:55:01.601569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.940 qpair failed and we were unable to recover it. 00:30:09.940 [2024-10-01 16:55:01.611538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.940 [2024-10-01 16:55:01.611601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.940 [2024-10-01 16:55:01.611611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.940 [2024-10-01 16:55:01.611617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.940 [2024-10-01 16:55:01.611621] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:09.940 [2024-10-01 16:55:01.611632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.940 qpair failed and we were unable to recover it. 
00:30:10.202 [2024-10-01 16:55:01.621524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.202 [2024-10-01 16:55:01.621568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.202 [2024-10-01 16:55:01.621578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.202 [2024-10-01 16:55:01.621583] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.202 [2024-10-01 16:55:01.621588] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.202 [2024-10-01 16:55:01.621599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.202 qpair failed and we were unable to recover it. 00:30:10.202 [2024-10-01 16:55:01.631580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.202 [2024-10-01 16:55:01.631631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.202 [2024-10-01 16:55:01.631641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.202 [2024-10-01 16:55:01.631647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.202 [2024-10-01 16:55:01.631654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.202 [2024-10-01 16:55:01.631665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.202 qpair failed and we were unable to recover it. 00:30:10.202 [2024-10-01 16:55:01.641611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.202 [2024-10-01 16:55:01.641653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.202 [2024-10-01 16:55:01.641664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.202 [2024-10-01 16:55:01.641669] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.202 [2024-10-01 16:55:01.641674] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.202 [2024-10-01 16:55:01.641685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.202 qpair failed and we were unable to recover it. 
00:30:10.202 [2024-10-01 16:55:01.651647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.202 [2024-10-01 16:55:01.651687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.202 [2024-10-01 16:55:01.651697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.202 [2024-10-01 16:55:01.651702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.202 [2024-10-01 16:55:01.651708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.202 [2024-10-01 16:55:01.651718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.202 qpair failed and we were unable to recover it. 00:30:10.202 [2024-10-01 16:55:01.661657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.202 [2024-10-01 16:55:01.661698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.202 [2024-10-01 16:55:01.661709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.202 [2024-10-01 16:55:01.661715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.202 [2024-10-01 16:55:01.661720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.202 [2024-10-01 16:55:01.661730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.202 qpair failed and we were unable to recover it. 00:30:10.202 [2024-10-01 16:55:01.671712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.202 [2024-10-01 16:55:01.671800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.202 [2024-10-01 16:55:01.671811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.202 [2024-10-01 16:55:01.671817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.202 [2024-10-01 16:55:01.671822] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.202 [2024-10-01 16:55:01.671833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.202 qpair failed and we were unable to recover it. 
00:30:10.202 [2024-10-01 16:55:01.681730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.202 [2024-10-01 16:55:01.681776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.202 [2024-10-01 16:55:01.681795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.202 [2024-10-01 16:55:01.681806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.202 [2024-10-01 16:55:01.681812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.202 [2024-10-01 16:55:01.681826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.202 qpair failed and we were unable to recover it. 00:30:10.202 [2024-10-01 16:55:01.691746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.202 [2024-10-01 16:55:01.691797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.202 [2024-10-01 16:55:01.691816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.202 [2024-10-01 16:55:01.691822] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.202 [2024-10-01 16:55:01.691828] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.202 [2024-10-01 16:55:01.691842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.202 qpair failed and we were unable to recover it. 00:30:10.202 [2024-10-01 16:55:01.701734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.202 [2024-10-01 16:55:01.701777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.202 [2024-10-01 16:55:01.701789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.202 [2024-10-01 16:55:01.701794] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.202 [2024-10-01 16:55:01.701799] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.202 [2024-10-01 16:55:01.701811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.202 qpair failed and we were unable to recover it. 
00:30:10.202 [2024-10-01 16:55:01.711812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.202 [2024-10-01 16:55:01.711861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.202 [2024-10-01 16:55:01.711872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.202 [2024-10-01 16:55:01.711877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.202 [2024-10-01 16:55:01.711882] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.202 [2024-10-01 16:55:01.711894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.202 qpair failed and we were unable to recover it. 00:30:10.202 [2024-10-01 16:55:01.721831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.202 [2024-10-01 16:55:01.721882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.202 [2024-10-01 16:55:01.721892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.202 [2024-10-01 16:55:01.721898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.202 [2024-10-01 16:55:01.721902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.202 [2024-10-01 16:55:01.721913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.202 qpair failed and we were unable to recover it. 00:30:10.202 [2024-10-01 16:55:01.731862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.202 [2024-10-01 16:55:01.731903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.202 [2024-10-01 16:55:01.731915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.202 [2024-10-01 16:55:01.731921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.202 [2024-10-01 16:55:01.731926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.202 [2024-10-01 16:55:01.731937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.203 qpair failed and we were unable to recover it. 
00:30:10.203 [2024-10-01 16:55:01.741872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.203 [2024-10-01 16:55:01.741930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.203 [2024-10-01 16:55:01.741941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.203 [2024-10-01 16:55:01.741946] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.203 [2024-10-01 16:55:01.741952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.203 [2024-10-01 16:55:01.741962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.203 qpair failed and we were unable to recover it. 00:30:10.203 [2024-10-01 16:55:01.751912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.203 [2024-10-01 16:55:01.751983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.203 [2024-10-01 16:55:01.751994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.203 [2024-10-01 16:55:01.752000] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.203 [2024-10-01 16:55:01.752005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.203 [2024-10-01 16:55:01.752017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.203 qpair failed and we were unable to recover it. 00:30:10.203 [2024-10-01 16:55:01.761962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.203 [2024-10-01 16:55:01.762014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.203 [2024-10-01 16:55:01.762024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.203 [2024-10-01 16:55:01.762030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.203 [2024-10-01 16:55:01.762035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.203 [2024-10-01 16:55:01.762046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.203 qpair failed and we were unable to recover it. 
00:30:10.203 [2024-10-01 16:55:01.771857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.203 [2024-10-01 16:55:01.771896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.203 [2024-10-01 16:55:01.771910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.203 [2024-10-01 16:55:01.771916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.203 [2024-10-01 16:55:01.771921] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.203 [2024-10-01 16:55:01.771932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.203 qpair failed and we were unable to recover it. 00:30:10.203 [2024-10-01 16:55:01.781993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.203 [2024-10-01 16:55:01.782036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.203 [2024-10-01 16:55:01.782047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.203 [2024-10-01 16:55:01.782052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.203 [2024-10-01 16:55:01.782058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.203 [2024-10-01 16:55:01.782068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.203 qpair failed and we were unable to recover it. 00:30:10.203 [2024-10-01 16:55:01.792044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.203 [2024-10-01 16:55:01.792102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.203 [2024-10-01 16:55:01.792112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.203 [2024-10-01 16:55:01.792118] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.203 [2024-10-01 16:55:01.792123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.203 [2024-10-01 16:55:01.792134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.203 qpair failed and we were unable to recover it. 
00:30:10.203 [2024-10-01 16:55:01.801987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.203 [2024-10-01 16:55:01.802030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.203 [2024-10-01 16:55:01.802041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.203 [2024-10-01 16:55:01.802046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.203 [2024-10-01 16:55:01.802051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.203 [2024-10-01 16:55:01.802062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.203 qpair failed and we were unable to recover it. 00:30:10.203 [2024-10-01 16:55:01.811974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.203 [2024-10-01 16:55:01.812022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.203 [2024-10-01 16:55:01.812032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.203 [2024-10-01 16:55:01.812038] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.203 [2024-10-01 16:55:01.812043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.203 [2024-10-01 16:55:01.812057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.203 qpair failed and we were unable to recover it. 00:30:10.203 [2024-10-01 16:55:01.822152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.203 [2024-10-01 16:55:01.822231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.203 [2024-10-01 16:55:01.822241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.203 [2024-10-01 16:55:01.822246] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.203 [2024-10-01 16:55:01.822251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.203 [2024-10-01 16:55:01.822263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.203 qpair failed and we were unable to recover it. 
00:30:10.203 [2024-10-01 16:55:01.832150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.203 [2024-10-01 16:55:01.832239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.203 [2024-10-01 16:55:01.832250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.203 [2024-10-01 16:55:01.832256] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.203 [2024-10-01 16:55:01.832261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.203 [2024-10-01 16:55:01.832273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.203 qpair failed and we were unable to recover it. 00:30:10.203 [2024-10-01 16:55:01.842078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.203 [2024-10-01 16:55:01.842129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.203 [2024-10-01 16:55:01.842139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.203 [2024-10-01 16:55:01.842145] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.203 [2024-10-01 16:55:01.842149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.203 [2024-10-01 16:55:01.842160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.203 qpair failed and we were unable to recover it. 00:30:10.203 [2024-10-01 16:55:01.852208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.203 [2024-10-01 16:55:01.852262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.203 [2024-10-01 16:55:01.852272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.203 [2024-10-01 16:55:01.852278] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.203 [2024-10-01 16:55:01.852283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.203 [2024-10-01 16:55:01.852293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.203 qpair failed and we were unable to recover it. 
00:30:10.203 [2024-10-01 16:55:01.862127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.203 [2024-10-01 16:55:01.862184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.203 [2024-10-01 16:55:01.862199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.203 [2024-10-01 16:55:01.862204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.203 [2024-10-01 16:55:01.862209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.203 [2024-10-01 16:55:01.862220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.203 qpair failed and we were unable to recover it. 00:30:10.203 [2024-10-01 16:55:01.872347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.203 [2024-10-01 16:55:01.872429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.203 [2024-10-01 16:55:01.872440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.203 [2024-10-01 16:55:01.872445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.203 [2024-10-01 16:55:01.872451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.203 [2024-10-01 16:55:01.872461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.203 qpair failed and we were unable to recover it. 00:30:10.203 [2024-10-01 16:55:01.882274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.203 [2024-10-01 16:55:01.882316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.203 [2024-10-01 16:55:01.882326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.203 [2024-10-01 16:55:01.882332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.203 [2024-10-01 16:55:01.882336] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.203 [2024-10-01 16:55:01.882347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.203 qpair failed and we were unable to recover it. 
00:30:10.464 [2024-10-01 16:55:01.892344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.464 [2024-10-01 16:55:01.892389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.464 [2024-10-01 16:55:01.892400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.464 [2024-10-01 16:55:01.892405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.464 [2024-10-01 16:55:01.892410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.464 [2024-10-01 16:55:01.892420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.464 qpair failed and we were unable to recover it. 00:30:10.464 [2024-10-01 16:55:01.902339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.464 [2024-10-01 16:55:01.902378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.464 [2024-10-01 16:55:01.902389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.464 [2024-10-01 16:55:01.902394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.464 [2024-10-01 16:55:01.902399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.464 [2024-10-01 16:55:01.902413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.464 qpair failed and we were unable to recover it. 00:30:10.464 [2024-10-01 16:55:01.912390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.464 [2024-10-01 16:55:01.912436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.464 [2024-10-01 16:55:01.912446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.464 [2024-10-01 16:55:01.912452] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.464 [2024-10-01 16:55:01.912456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.464 [2024-10-01 16:55:01.912467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.464 qpair failed and we were unable to recover it. 
00:30:10.464 [2024-10-01 16:55:01.922362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.464 [2024-10-01 16:55:01.922407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.464 [2024-10-01 16:55:01.922417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.464 [2024-10-01 16:55:01.922422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.464 [2024-10-01 16:55:01.922428] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.464 [2024-10-01 16:55:01.922438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.464 qpair failed and we were unable to recover it. 00:30:10.464 [2024-10-01 16:55:01.932411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.464 [2024-10-01 16:55:01.932496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.464 [2024-10-01 16:55:01.932507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.464 [2024-10-01 16:55:01.932513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.464 [2024-10-01 16:55:01.932518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.464 [2024-10-01 16:55:01.932528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.464 qpair failed and we were unable to recover it. 00:30:10.464 [2024-10-01 16:55:01.942422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.464 [2024-10-01 16:55:01.942513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.464 [2024-10-01 16:55:01.942524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.464 [2024-10-01 16:55:01.942529] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.464 [2024-10-01 16:55:01.942534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.464 [2024-10-01 16:55:01.942545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.464 qpair failed and we were unable to recover it. 
00:30:10.464 [2024-10-01 16:55:01.952470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.464 [2024-10-01 16:55:01.952520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.464 [2024-10-01 16:55:01.952530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.464 [2024-10-01 16:55:01.952536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.464 [2024-10-01 16:55:01.952541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.464 [2024-10-01 16:55:01.952551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.464 qpair failed and we were unable to recover it. 00:30:10.464 [2024-10-01 16:55:01.962493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.464 [2024-10-01 16:55:01.962539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.464 [2024-10-01 16:55:01.962549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.464 [2024-10-01 16:55:01.962554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.464 [2024-10-01 16:55:01.962559] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.464 [2024-10-01 16:55:01.962570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.464 qpair failed and we were unable to recover it. 00:30:10.464 [2024-10-01 16:55:01.972517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.464 [2024-10-01 16:55:01.972560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.464 [2024-10-01 16:55:01.972570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.464 [2024-10-01 16:55:01.972576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.464 [2024-10-01 16:55:01.972580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.464 [2024-10-01 16:55:01.972591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.464 qpair failed and we were unable to recover it. 
00:30:10.464 [2024-10-01 16:55:01.982525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.464 [2024-10-01 16:55:01.982569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.464 [2024-10-01 16:55:01.982579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.464 [2024-10-01 16:55:01.982585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.464 [2024-10-01 16:55:01.982590] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.464 [2024-10-01 16:55:01.982600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.464 qpair failed and we were unable to recover it. 00:30:10.464 [2024-10-01 16:55:01.992565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.464 [2024-10-01 16:55:01.992617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.464 [2024-10-01 16:55:01.992627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.464 [2024-10-01 16:55:01.992633] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.464 [2024-10-01 16:55:01.992641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.464 [2024-10-01 16:55:01.992651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.464 qpair failed and we were unable to recover it. 00:30:10.464 [2024-10-01 16:55:02.002599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.464 [2024-10-01 16:55:02.002645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.464 [2024-10-01 16:55:02.002655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.464 [2024-10-01 16:55:02.002661] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.464 [2024-10-01 16:55:02.002665] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.464 [2024-10-01 16:55:02.002676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.464 qpair failed and we were unable to recover it. 
00:30:10.464 [2024-10-01 16:55:02.012629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.464 [2024-10-01 16:55:02.012671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.464 [2024-10-01 16:55:02.012681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.464 [2024-10-01 16:55:02.012687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.464 [2024-10-01 16:55:02.012691] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.464 [2024-10-01 16:55:02.012702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.464 qpair failed and we were unable to recover it. 00:30:10.464 [2024-10-01 16:55:02.022629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.464 [2024-10-01 16:55:02.022692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.464 [2024-10-01 16:55:02.022702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.464 [2024-10-01 16:55:02.022708] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.464 [2024-10-01 16:55:02.022712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.464 [2024-10-01 16:55:02.022723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.464 qpair failed and we were unable to recover it. 00:30:10.464 [2024-10-01 16:55:02.032695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.464 [2024-10-01 16:55:02.032778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.464 [2024-10-01 16:55:02.032798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.464 [2024-10-01 16:55:02.032804] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.464 [2024-10-01 16:55:02.032809] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.465 [2024-10-01 16:55:02.032825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.465 qpair failed and we were unable to recover it. 
00:30:10.465 [2024-10-01 16:55:02.042706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.465 [2024-10-01 16:55:02.042765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.465 [2024-10-01 16:55:02.042785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.465 [2024-10-01 16:55:02.042791] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.465 [2024-10-01 16:55:02.042797] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.465 [2024-10-01 16:55:02.042811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.465 qpair failed and we were unable to recover it. 00:30:10.465 [2024-10-01 16:55:02.052734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.465 [2024-10-01 16:55:02.052775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.465 [2024-10-01 16:55:02.052787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.465 [2024-10-01 16:55:02.052793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.465 [2024-10-01 16:55:02.052798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.465 [2024-10-01 16:55:02.052809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.465 qpair failed and we were unable to recover it. 00:30:10.465 [2024-10-01 16:55:02.062708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.465 [2024-10-01 16:55:02.062754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.465 [2024-10-01 16:55:02.062765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.465 [2024-10-01 16:55:02.062770] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.465 [2024-10-01 16:55:02.062775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.465 [2024-10-01 16:55:02.062786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.465 qpair failed and we were unable to recover it. 
00:30:10.465 [2024-10-01 16:55:02.072797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.465 [2024-10-01 16:55:02.072842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.465 [2024-10-01 16:55:02.072852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.465 [2024-10-01 16:55:02.072858] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.465 [2024-10-01 16:55:02.072863] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.465 [2024-10-01 16:55:02.072874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.465 qpair failed and we were unable to recover it. 00:30:10.465 [2024-10-01 16:55:02.082803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.465 [2024-10-01 16:55:02.082859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.465 [2024-10-01 16:55:02.082869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.465 [2024-10-01 16:55:02.082878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.465 [2024-10-01 16:55:02.082883] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.465 [2024-10-01 16:55:02.082894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.465 qpair failed and we were unable to recover it. 00:30:10.465 [2024-10-01 16:55:02.092832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.465 [2024-10-01 16:55:02.092882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.465 [2024-10-01 16:55:02.092893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.465 [2024-10-01 16:55:02.092899] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.465 [2024-10-01 16:55:02.092904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.465 [2024-10-01 16:55:02.092915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.465 qpair failed and we were unable to recover it. 
00:30:10.465 [2024-10-01 16:55:02.102835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.465 [2024-10-01 16:55:02.102874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.465 [2024-10-01 16:55:02.102884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.465 [2024-10-01 16:55:02.102890] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.465 [2024-10-01 16:55:02.102895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.465 [2024-10-01 16:55:02.102905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.465 qpair failed and we were unable to recover it. 00:30:10.465 [2024-10-01 16:55:02.112919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.465 [2024-10-01 16:55:02.112965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.465 [2024-10-01 16:55:02.112979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.465 [2024-10-01 16:55:02.112985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.465 [2024-10-01 16:55:02.112990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.465 [2024-10-01 16:55:02.113001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.465 qpair failed and we were unable to recover it. 00:30:10.465 [2024-10-01 16:55:02.122923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.465 [2024-10-01 16:55:02.122966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.465 [2024-10-01 16:55:02.122980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.465 [2024-10-01 16:55:02.122985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.465 [2024-10-01 16:55:02.122990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.465 [2024-10-01 16:55:02.123001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.465 qpair failed and we were unable to recover it. 
00:30:10.465 [2024-10-01 16:55:02.132987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.465 [2024-10-01 16:55:02.133039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.465 [2024-10-01 16:55:02.133049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.465 [2024-10-01 16:55:02.133055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.465 [2024-10-01 16:55:02.133060] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.465 [2024-10-01 16:55:02.133071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.465 qpair failed and we were unable to recover it. 00:30:10.465 [2024-10-01 16:55:02.142961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.465 [2024-10-01 16:55:02.143009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.465 [2024-10-01 16:55:02.143019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.465 [2024-10-01 16:55:02.143024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.465 [2024-10-01 16:55:02.143029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.465 [2024-10-01 16:55:02.143040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.465 qpair failed and we were unable to recover it. 00:30:10.726 [2024-10-01 16:55:02.153013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.726 [2024-10-01 16:55:02.153056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.726 [2024-10-01 16:55:02.153066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.726 [2024-10-01 16:55:02.153071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.726 [2024-10-01 16:55:02.153076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.726 [2024-10-01 16:55:02.153087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.726 qpair failed and we were unable to recover it. 
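The "sct 1, sc 130" pair in each group decodes per the NVMe-oF spec: status code type 0x1 is the command-specific type, and for a Fabrics CONNECT command status code 0x82 (decimal 130) means "Invalid Parameters", consistent with the target-side "Unknown controller ID" complaint. A hedged helper, assuming the spec constants SPDK ships in its public headers (the function name is hypothetical):

```c
#include <stdbool.h>
#include "spdk/nvme_spec.h"
#include "spdk/nvmf_spec.h"

/* Classify the failure logged above: sct 1 == command-specific status,
 * sc 130 (0x82) == Fabrics CONNECT "Invalid Parameters". */
static bool
connect_rejected_invalid_params(const struct spdk_nvme_cpl *cpl)
{
	return cpl->status.sct == SPDK_NVME_SCT_COMMAND_SPECIFIC &&
	       cpl->status.sc == SPDK_NVMF_FABRIC_SC_INVALID_PARAM;
}
```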
00:30:10.726 [2024-10-01 16:55:02.163018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.726 [2024-10-01 16:55:02.163069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.726 [2024-10-01 16:55:02.163080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.726 [2024-10-01 16:55:02.163085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.726 [2024-10-01 16:55:02.163090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.726 [2024-10-01 16:55:02.163101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.726 qpair failed and we were unable to recover it. 00:30:10.726 [2024-10-01 16:55:02.173057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.726 [2024-10-01 16:55:02.173145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.726 [2024-10-01 16:55:02.173155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.726 [2024-10-01 16:55:02.173164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.726 [2024-10-01 16:55:02.173169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.726 [2024-10-01 16:55:02.173180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.726 qpair failed and we were unable to recover it. 00:30:10.726 [2024-10-01 16:55:02.183053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.726 [2024-10-01 16:55:02.183096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.726 [2024-10-01 16:55:02.183106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.726 [2024-10-01 16:55:02.183112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.726 [2024-10-01 16:55:02.183117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.726 [2024-10-01 16:55:02.183127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.726 qpair failed and we were unable to recover it. 
00:30:10.726 [2024-10-01 16:55:02.193137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.726 [2024-10-01 16:55:02.193183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.726 [2024-10-01 16:55:02.193193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.726 [2024-10-01 16:55:02.193198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.726 [2024-10-01 16:55:02.193203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.726 [2024-10-01 16:55:02.193213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.726 qpair failed and we were unable to recover it. 00:30:10.726 [2024-10-01 16:55:02.203135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.726 [2024-10-01 16:55:02.203177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.726 [2024-10-01 16:55:02.203187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.726 [2024-10-01 16:55:02.203192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.726 [2024-10-01 16:55:02.203197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.726 [2024-10-01 16:55:02.203207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.726 qpair failed and we were unable to recover it. 00:30:10.726 [2024-10-01 16:55:02.213153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.726 [2024-10-01 16:55:02.213193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.726 [2024-10-01 16:55:02.213203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.726 [2024-10-01 16:55:02.213208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.726 [2024-10-01 16:55:02.213213] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.726 [2024-10-01 16:55:02.213223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.726 qpair failed and we were unable to recover it. 
00:30:10.726 [2024-10-01 16:55:02.223193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.726 [2024-10-01 16:55:02.223235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.726 [2024-10-01 16:55:02.223245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.726 [2024-10-01 16:55:02.223250] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.726 [2024-10-01 16:55:02.223256] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.726 [2024-10-01 16:55:02.223266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.726 qpair failed and we were unable to recover it. 00:30:10.726 [2024-10-01 16:55:02.233123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.726 [2024-10-01 16:55:02.233163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.726 [2024-10-01 16:55:02.233173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.726 [2024-10-01 16:55:02.233179] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.726 [2024-10-01 16:55:02.233183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.726 [2024-10-01 16:55:02.233194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.726 qpair failed and we were unable to recover it. 00:30:10.726 [2024-10-01 16:55:02.243280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.726 [2024-10-01 16:55:02.243324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.726 [2024-10-01 16:55:02.243335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.726 [2024-10-01 16:55:02.243340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.726 [2024-10-01 16:55:02.243345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.726 [2024-10-01 16:55:02.243356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.726 qpair failed and we were unable to recover it. 
00:30:10.726 [2024-10-01 16:55:02.253257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.726 [2024-10-01 16:55:02.253296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.726 [2024-10-01 16:55:02.253306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.726 [2024-10-01 16:55:02.253311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.726 [2024-10-01 16:55:02.253316] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.726 [2024-10-01 16:55:02.253326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.726 qpair failed and we were unable to recover it. 00:30:10.726 [2024-10-01 16:55:02.263343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.726 [2024-10-01 16:55:02.263387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.726 [2024-10-01 16:55:02.263405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.726 [2024-10-01 16:55:02.263414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.726 [2024-10-01 16:55:02.263419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.726 [2024-10-01 16:55:02.263430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.726 qpair failed and we were unable to recover it. 00:30:10.726 [2024-10-01 16:55:02.273363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.726 [2024-10-01 16:55:02.273413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.726 [2024-10-01 16:55:02.273424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.726 [2024-10-01 16:55:02.273430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.726 [2024-10-01 16:55:02.273435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.726 [2024-10-01 16:55:02.273446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.726 qpair failed and we were unable to recover it. 
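The target-side half of each group, "Unknown controller ID 0x1", points at the CONNECT payload rather than the transport: an I/O-queue CONNECT names the controller ID assigned on the admin queue, and after this test's deliberate disconnect that controller apparently no longer exists on the target, so the lookup in _nvmf_ctrlr_add_io_qpair() fails. A sketch of the cntlid rule, using the spdk_nvmf_fabric_connect_data layout from the spec headers (illustrative only; the helper name is hypothetical):

```c
#include <stdint.h>
#include "spdk/nvmf_spec.h"

/* An admin-queue CONNECT on the dynamic controller model asks for any
 * controller (cntlid 0xFFFF); an I/O-queue CONNECT must carry the
 * cntlid the admin CONNECT returned. If that controller is gone, the
 * target rejects the CONNECT with Invalid Parameters -- exactly the
 * sct 1 / sc 130 completion seen throughout this stream. */
static void
fill_connect_cntlid(struct spdk_nvmf_fabric_connect_data *data,
		    uint16_t qid, uint16_t assigned_cntlid)
{
	data->cntlid = (qid == 0) ? 0xFFFF : assigned_cntlid;
}
```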
00:30:10.726 [2024-10-01 16:55:02.283361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.726 [2024-10-01 16:55:02.283411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.726 [2024-10-01 16:55:02.283422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.727 [2024-10-01 16:55:02.283427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.727 [2024-10-01 16:55:02.283432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.727 [2024-10-01 16:55:02.283442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.727 qpair failed and we were unable to recover it. 00:30:10.727 [2024-10-01 16:55:02.293350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.727 [2024-10-01 16:55:02.293390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.727 [2024-10-01 16:55:02.293401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.727 [2024-10-01 16:55:02.293406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.727 [2024-10-01 16:55:02.293411] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.727 [2024-10-01 16:55:02.293421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.727 qpair failed and we were unable to recover it. 00:30:10.727 [2024-10-01 16:55:02.303263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.727 [2024-10-01 16:55:02.303321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.727 [2024-10-01 16:55:02.303331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.727 [2024-10-01 16:55:02.303337] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.727 [2024-10-01 16:55:02.303342] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.727 [2024-10-01 16:55:02.303355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.727 qpair failed and we were unable to recover it. 
00:30:10.727 [2024-10-01 16:55:02.313429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.727 [2024-10-01 16:55:02.313475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.727 [2024-10-01 16:55:02.313485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.727 [2024-10-01 16:55:02.313490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.727 [2024-10-01 16:55:02.313495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.727 [2024-10-01 16:55:02.313505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.727 qpair failed and we were unable to recover it. 00:30:10.727 [2024-10-01 16:55:02.323475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.727 [2024-10-01 16:55:02.323526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.727 [2024-10-01 16:55:02.323536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.727 [2024-10-01 16:55:02.323541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.727 [2024-10-01 16:55:02.323546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.727 [2024-10-01 16:55:02.323557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.727 qpair failed and we were unable to recover it. 00:30:10.727 [2024-10-01 16:55:02.333495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.727 [2024-10-01 16:55:02.333542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.727 [2024-10-01 16:55:02.333552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.727 [2024-10-01 16:55:02.333557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.727 [2024-10-01 16:55:02.333562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.727 [2024-10-01 16:55:02.333573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.727 qpair failed and we were unable to recover it. 
00:30:10.727 [2024-10-01 16:55:02.343496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.727 [2024-10-01 16:55:02.343549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.727 [2024-10-01 16:55:02.343559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.727 [2024-10-01 16:55:02.343564] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.727 [2024-10-01 16:55:02.343569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.727 [2024-10-01 16:55:02.343579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.727 qpair failed and we were unable to recover it. 00:30:10.727 [2024-10-01 16:55:02.353538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.727 [2024-10-01 16:55:02.353588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.727 [2024-10-01 16:55:02.353601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.727 [2024-10-01 16:55:02.353607] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.727 [2024-10-01 16:55:02.353611] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde70000b90 00:30:10.727 [2024-10-01 16:55:02.353622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.727 qpair failed and we were unable to recover it. 00:30:10.727 [2024-10-01 16:55:02.363579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.727 [2024-10-01 16:55:02.363672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.727 [2024-10-01 16:55:02.363692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.727 [2024-10-01 16:55:02.363699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.727 [2024-10-01 16:55:02.363704] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde68000b90 00:30:10.727 [2024-10-01 16:55:02.363722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.727 qpair failed and we were unable to recover it. 
00:30:10.727 [2024-10-01 16:55:02.373592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.727 [2024-10-01 16:55:02.373633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.727 [2024-10-01 16:55:02.373645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.727 [2024-10-01 16:55:02.373651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.727 [2024-10-01 16:55:02.373656] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde68000b90 00:30:10.727 [2024-10-01 16:55:02.373668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.727 qpair failed and we were unable to recover it. 00:30:10.727 [2024-10-01 16:55:02.383623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.727 [2024-10-01 16:55:02.383669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.727 [2024-10-01 16:55:02.383688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.727 [2024-10-01 16:55:02.383694] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.727 [2024-10-01 16:55:02.383700] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde64000b90 00:30:10.727 [2024-10-01 16:55:02.383715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.727 qpair failed and we were unable to recover it. 00:30:10.727 [2024-10-01 16:55:02.393691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.727 [2024-10-01 16:55:02.393793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.727 [2024-10-01 16:55:02.393805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.727 [2024-10-01 16:55:02.393810] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.727 [2024-10-01 16:55:02.393815] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fde64000b90 00:30:10.727 [2024-10-01 16:55:02.393831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:10.727 qpair failed and we were unable to recover it. 00:30:10.727 [2024-10-01 16:55:02.394025] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:10.727 A controller has encountered a failure and is being reset. 00:30:10.727 [2024-10-01 16:55:02.394153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2051960 (9): Bad file descriptor 00:30:10.987 Controller properly reset. 
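The stream ends on the recovery path: a Keep Alive submission fails, the driver flags the controller as failed, and the harness resets and re-attaches it ("Controller properly reset.", followed by the re-initialization below). A hedged sketch of that detect-and-reset step using SPDK's public API (again illustrative, not the test's code; the function name is hypothetical):

```c
#include <stdio.h>
#include "spdk/nvme.h"

/* Poll one qpair; a negative return is a transport-level error, which
 * is also the source of every "CQ transport error -6" record above
 * (-6 == -ENXIO, printed as "No such device or address"). */
static void
poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	int rc = spdk_nvme_qpair_process_completions(qpair, 0 /* 0 = no limit */);

	if (rc < 0) {
		fprintf(stderr, "CQ transport error %d; resetting controller\n", rc);
		if (spdk_nvme_ctrlr_reset(ctrlr) == 0) {
			printf("Controller properly reset.\n");
		}
	}
}
```

After a successful reset the host re-runs controller initialization, which is what the "Attaching to NVMe over Fabrics controller" and per-core "Starting thread" records below correspond to.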
00:30:10.987 Initializing NVMe Controllers 00:30:10.987 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:10.987 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:10.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:10.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:10.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:10.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:10.987 Initialization complete. Launching workers. 00:30:10.987 Starting thread on core 1 00:30:10.987 Starting thread on core 2 00:30:10.987 Starting thread on core 3 00:30:10.987 Starting thread on core 0 00:30:10.987 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:10.987 00:30:10.987 real 0m10.803s 00:30:10.987 user 0m19.617s 00:30:10.987 sys 0m3.503s 00:30:10.987 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:10.988 ************************************ 00:30:10.988 END TEST nvmf_target_disconnect_tc2 00:30:10.988 ************************************ 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:10.988 rmmod nvme_tcp 00:30:10.988 rmmod nvme_fabrics 00:30:10.988 rmmod nvme_keyring 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 2865858 ']' 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 2865858 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2865858 ']' 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2865858 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2865858 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2865858' 00:30:10.988 killing process with pid 2865858 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 2865858 00:30:10.988 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2865858 00:30:11.248 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:11.248 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:11.248 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:11.248 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:11.248 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:30:11.248 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:11.248 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:30:11.248 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:11.248 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:11.248 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.248 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.248 16:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.159 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:13.159 00:30:13.159 real 0m20.860s 00:30:13.159 user 0m47.378s 00:30:13.159 sys 0m9.428s 00:30:13.159 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:13.159 16:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:13.159 ************************************ 00:30:13.159 END TEST nvmf_target_disconnect 00:30:13.159 ************************************ 00:30:13.418 16:55:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:13.418 00:30:13.418 real 6m31.107s 00:30:13.418 user 11m19.382s 00:30:13.418 sys 2m9.308s 00:30:13.419 16:55:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:13.419 16:55:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.419 ************************************ 00:30:13.419 END TEST nvmf_host 00:30:13.419 ************************************ 00:30:13.419 16:55:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:13.419 16:55:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:13.419 16:55:04 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:13.419 16:55:04 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:13.419 16:55:04 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:13.419 16:55:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:13.419 ************************************ 00:30:13.419 START TEST nvmf_target_core_interrupt_mode 00:30:13.419 ************************************ 00:30:13.419 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:13.419 * Looking for test storage... 00:30:13.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:13.419 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:13.419 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:13.419 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:13.682 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:13.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.683 --rc genhtml_branch_coverage=1 00:30:13.683 --rc genhtml_function_coverage=1 00:30:13.683 --rc genhtml_legend=1 00:30:13.683 --rc geninfo_all_blocks=1 00:30:13.683 --rc geninfo_unexecuted_blocks=1 00:30:13.683 00:30:13.683 ' 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:13.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.683 --rc genhtml_branch_coverage=1 00:30:13.683 --rc genhtml_function_coverage=1 00:30:13.683 --rc genhtml_legend=1 00:30:13.683 --rc geninfo_all_blocks=1 00:30:13.683 --rc geninfo_unexecuted_blocks=1 00:30:13.683 00:30:13.683 ' 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:13.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.683 --rc genhtml_branch_coverage=1 00:30:13.683 --rc genhtml_function_coverage=1 00:30:13.683 --rc genhtml_legend=1 00:30:13.683 --rc geninfo_all_blocks=1 00:30:13.683 --rc geninfo_unexecuted_blocks=1 00:30:13.683 00:30:13.683 ' 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:13.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.683 --rc genhtml_branch_coverage=1 00:30:13.683 --rc genhtml_function_coverage=1 00:30:13.683 --rc genhtml_legend=1 00:30:13.683 --rc geninfo_all_blocks=1 00:30:13.683 --rc geninfo_unexecuted_blocks=1 00:30:13.683 00:30:13.683 ' 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:13.683 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:13.684 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:13.684 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:13.684 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:13.684 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:13.684 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:13.684 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:13.684 ************************************ 00:30:13.684 START TEST nvmf_abort 00:30:13.684 ************************************ 00:30:13.684 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:13.684 * Looking for test storage... 00:30:13.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:13.684 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:13.684 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:30:13.684 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:13.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.945 --rc genhtml_branch_coverage=1 00:30:13.945 --rc genhtml_function_coverage=1 00:30:13.945 --rc genhtml_legend=1 00:30:13.945 --rc geninfo_all_blocks=1 00:30:13.945 --rc geninfo_unexecuted_blocks=1 00:30:13.945 00:30:13.945 ' 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:13.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.945 --rc genhtml_branch_coverage=1 00:30:13.945 --rc genhtml_function_coverage=1 00:30:13.945 --rc genhtml_legend=1 00:30:13.945 --rc geninfo_all_blocks=1 00:30:13.945 --rc geninfo_unexecuted_blocks=1 00:30:13.945 00:30:13.945 ' 00:30:13.945 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:13.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.945 --rc genhtml_branch_coverage=1 00:30:13.945 --rc genhtml_function_coverage=1 00:30:13.945 --rc genhtml_legend=1 00:30:13.946 --rc geninfo_all_blocks=1 00:30:13.946 --rc geninfo_unexecuted_blocks=1 00:30:13.946 00:30:13.946 ' 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:13.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.946 --rc genhtml_branch_coverage=1 00:30:13.946 --rc genhtml_function_coverage=1 00:30:13.946 --rc genhtml_legend=1 00:30:13.946 --rc geninfo_all_blocks=1 00:30:13.946 --rc geninfo_unexecuted_blocks=1 00:30:13.946 00:30:13.946 ' 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=[toolchain prepend trace collapsed: paths/export.sh@2-@4 repeat the same /opt/go/1.21.1/bin, /opt/golangci/1.54.2/bin and /opt/protoc/21.7/bin prepends shown once above] 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo $PATH [same collapsed value] 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.946 16:55:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:13.946 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:22.078 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:22.078 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:22.078 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:22.078 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:22.078 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:22.078 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:22.078 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:22.078 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:22.078 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:22.078 16:55:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:22.078 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:22.078 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:22.078 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:22.078 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:22.078 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:22.079 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:22.079 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:22.079 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:22.079 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:22.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:22.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:30:22.079 00:30:22.079 --- 10.0.0.2 ping statistics --- 00:30:22.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.079 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:22.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:22.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:30:22.079 00:30:22.079 --- 10.0.0.1 ping statistics --- 00:30:22.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.079 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:22.079 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:22.080 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:22.080 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:22.080 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:22.080 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:22.080 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:22.080 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:22.080 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=2870940 00:30:22.080 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 2870940 00:30:22.080 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:22.080 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2870940 ']' 00:30:22.080 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.080 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:22.080 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.080 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:22.080 16:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:22.080 [2024-10-01 16:55:13.014900] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:22.080 [2024-10-01 16:55:13.015948] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:30:22.080 [2024-10-01 16:55:13.016012] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.080 [2024-10-01 16:55:13.078265] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:22.080 [2024-10-01 16:55:13.144642] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.080 [2024-10-01 16:55:13.144682] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:22.080 [2024-10-01 16:55:13.144688] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.080 [2024-10-01 16:55:13.144693] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.080 [2024-10-01 16:55:13.144701] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:22.080 [2024-10-01 16:55:13.144829] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:22.080 [2024-10-01 16:55:13.144965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:22.080 [2024-10-01 16:55:13.144966] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.080 [2024-10-01 16:55:13.210267] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:22.080 [2024-10-01 16:55:13.210322] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:22.080 [2024-10-01 16:55:13.210635] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
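Condensed from the nvmf_tcp_init trace above: the harness moves one port of the discovered E810 pair (cvl_0_0) into a private network namespace to act as the target side and leaves its peer (cvl_0_1) in the root namespace as the initiator side, so both ends of the NVMe/TCP connection run on the same host over real hardware. A minimal sketch of that topology, restricted to commands that appear in the trace:

  ip netns add cvl_0_0_ns_spdk                                        # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # root ns -> target, verified above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator, verified above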
00:30:22.080 [2024-10-01 16:55:13.210667] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:22.080 [2024-10-01 16:55:13.290000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:22.080 Malloc0 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:22.080 Delay0 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:22.080 [2024-10-01 16:55:13.357745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.080 16:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:22.080 [2024-10-01 16:55:13.524020] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:23.988 Initializing NVMe Controllers 00:30:23.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:23.989 controller IO queue size 128 less than required 00:30:23.989 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:23.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:23.989 Initialization complete. Launching workers. 
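Condensed from the rpc_cmd trace above, the target configuration this abort run (statistics follow below) exercises is a short rpc.py sequence: a delay bdev is layered on a malloc bdev so that I/O lingers long enough to be aborted. A sketch of the equivalent plain invocations, assuming the harness's rpc_cmd wrapper resolves to scripts/rpc.py against the default RPC socket:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256     # TCP transport, flags as in the trace
  $rpc bdev_malloc_create 64 4096 -b Malloc0              # 64 MiB RAM disk, 4 KiB blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
                                                          # ~1 s average/p99 latencies (values in microseconds)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128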
00:30:23.989 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 31313 00:30:23.989 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31374, failed to submit 66 00:30:23.989 success 31313, unsuccessful 61, failed 0 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:23.989 rmmod nvme_tcp 00:30:23.989 rmmod nvme_fabrics 00:30:23.989 rmmod nvme_keyring 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 2870940 ']' 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 2870940 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2870940 ']' 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2870940 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:23.989 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2870940 00:30:24.249 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:24.249 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:24.249 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2870940' 00:30:24.249 killing process with pid 2870940 
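Reading the counters above: the 31313 successful plus 61 unsuccessful aborts account for all 31374 abort commands submitted (a further 66 could not be submitted at all), and the 31313 successfully aborted commands reappear as the failed: 31313 count on the NS line, next to the 127 I/Os that completed normally. Queue depth 128 against Delay0's roughly one-second latency is what keeps almost every outstanding command abortable; the "queue size 128 less than required" warning simply flags that at this depth requests may back up in the NVMe driver, as the "Consider using lower queue depth" line alongside it says.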
00:30:24.249 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2870940 00:30:24.249 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2870940 00:30:24.249 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:24.249 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:24.249 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:24.249 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:24.249 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:30:24.249 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:24.249 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:30:24.249 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:24.249 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:24.249 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.249 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.249 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.790 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:26.790 00:30:26.790 real 0m12.673s 00:30:26.790 user 0m10.866s 00:30:26.790 sys 0m6.665s 00:30:26.790 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:26.790 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:26.790 ************************************ 00:30:26.790 END TEST nvmf_abort 00:30:26.790 ************************************ 00:30:26.790 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:26.790 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:26.790 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:26.790 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:26.790 ************************************ 00:30:26.790 START TEST nvmf_ns_hotplug_stress 00:30:26.790 ************************************ 00:30:26.790 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:26.790 * Looking for test storage... 
00:30:26.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:26.790 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:26.790 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:30:26.790 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:26.790 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:26.790 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:26.790 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:26.790 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:26.790 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:26.790 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:26.790 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:26.790 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:26.790 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:26.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.791 --rc genhtml_branch_coverage=1 00:30:26.791 --rc genhtml_function_coverage=1 00:30:26.791 --rc genhtml_legend=1 00:30:26.791 --rc geninfo_all_blocks=1 00:30:26.791 --rc geninfo_unexecuted_blocks=1 00:30:26.791 00:30:26.791 ' 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:26.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.791 --rc genhtml_branch_coverage=1 00:30:26.791 --rc genhtml_function_coverage=1 00:30:26.791 --rc genhtml_legend=1 00:30:26.791 --rc geninfo_all_blocks=1 00:30:26.791 --rc geninfo_unexecuted_blocks=1 00:30:26.791 00:30:26.791 ' 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:26.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.791 --rc genhtml_branch_coverage=1 00:30:26.791 --rc genhtml_function_coverage=1 00:30:26.791 --rc genhtml_legend=1 00:30:26.791 --rc geninfo_all_blocks=1 00:30:26.791 --rc geninfo_unexecuted_blocks=1 00:30:26.791 00:30:26.791 ' 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:26.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.791 --rc genhtml_branch_coverage=1 00:30:26.791 --rc genhtml_function_coverage=1 
00:30:26.791 --rc genhtml_legend=1 00:30:26.791 --rc geninfo_all_blocks=1 00:30:26.791 --rc geninfo_unexecuted_blocks=1 00:30:26.791 00:30:26.791 ' 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
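The lcov version probe traced at the top of this test (and of nvmf_abort above) is a component-wise compare: the version string is split on '.', '-' and ':' and the fields are compared numerically until one side differs. A minimal standalone sketch of the '<' path, simplified from the cmp_versions trace (the real helper in scripts/common.sh also validates that each field is numeric):

  lt() {   # lt A B -> exit 0 when version A sorts before version B
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left side is newer
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # left side is older
      done
      return 1   # versions are equal
  }
  # as traced: lcov 1.15 sorts before 2, so the legacy branch/function coverage flags are enabled
  lt "$(lcov --version | awk '{print $NF}')" 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'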
00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=[toolchain prepend trace collapsed: same /opt/go/1.21.1/bin, /opt/golangci/1.54.2/bin and /opt/protoc/21.7/bin prepends as in the nvmf_abort setup above, repeated by paths/export.sh@2-@4 and echoed at @6] 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:26.791 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:34.923 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:34.923 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:34.923 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:34.923 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:34.923 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:34.923 16:55:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:34.923 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:34.923 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:34.923 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:34.923 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:34.923 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:34.923 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:34.923 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:34.923 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:34.923 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:34.923 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:34.923 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:34.923 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:34.924 16:55:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:34.924 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:34.924 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:34.924 
16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:34.924 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:34.924 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:34.924 16:55:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:34.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:34.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:30:34.924 00:30:34.924 --- 10.0.0.2 ping statistics --- 00:30:34.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.924 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:34.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:34.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:30:34.924 00:30:34.924 --- 10.0.0.1 ping statistics --- 00:30:34.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.924 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:34.924 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:34.925 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:34.925 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:34.925 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:34.925 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:34.925 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:34.925 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:34.925 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:34.925 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:34.925 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=2875396 00:30:34.925 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 2875396 00:30:34.925 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:34.925 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2875396 ']' 00:30:34.925 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.925 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:34.925 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
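
Condensing the gather_supported_nvmf_pci_devs and nvmf_tcp_init trace above: ports are matched by PCI vendor/device ID under sysfs (0x8086 with 0x1592/0x159b for E810, per the "Found 0000:4b:00.x" records), each address is mapped to its kernel netdev, and one port of the pair is moved into a network namespace so the target side (10.0.0.2 on cvl_0_0) and the initiator side (10.0.0.1 on cvl_0_1) traverse the physical link. A runnable equivalent under those assumptions (root required; device IDs, interface names, and addresses are the ones in the log):

    # 1) Match E810 ports by vendor/device ID, as nvmf/common.sh@366-377 does.
    intel=0x8086
    e810=()
    for dev in /sys/bus/pci/devices/*; do
        if [[ $(<"$dev/vendor") == "$intel" ]]; then
            case $(<"$dev/device") in
                0x1592|0x159b) e810+=("${dev##*/}") ;;
            esac
        fi
    done
    # 2) Resolve each PCI address to its netdev, as pci_net_devs does above.
    for pci in "${e810[@]}"; do
        pci_net_devs=(/sys/bus/pci/devices/"$pci"/net/*)
        echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
    done
    # 3) Split the pair across a namespace (names and addresses from the trace).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # 4) Open the NVMe/TCP port and verify reachability both ways (the two pings above).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
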
00:30:34.925 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:34.925 16:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:34.925 [2024-10-01 16:55:25.953234] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:34.925 [2024-10-01 16:55:25.954353] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:30:34.925 [2024-10-01 16:55:25.954407] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.925 [2024-10-01 16:55:26.018260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:34.925 [2024-10-01 16:55:26.079475] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:34.925 [2024-10-01 16:55:26.079513] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:34.925 [2024-10-01 16:55:26.079520] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:34.925 [2024-10-01 16:55:26.079525] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:34.925 [2024-10-01 16:55:26.079530] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:34.925 [2024-10-01 16:55:26.079636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:34.925 [2024-10-01 16:55:26.079751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:34.925 [2024-10-01 16:55:26.079753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:34.925 [2024-10-01 16:55:26.147224] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:34.925 [2024-10-01 16:55:26.147279] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:34.925 [2024-10-01 16:55:26.147521] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:34.925 [2024-10-01 16:55:26.147579] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
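
nvmfappstart then launches the target inside the namespace, and the NOTICE lines above are its startup: interrupt mode enabled, DPDK EAL brought up with core mask 0xE, three reactors started, and each spdk_thread switched to interrupt mode. The flags (-i 0 -e 0xFFFF --interrupt-mode) are the ones build_nvmf_app_args appended earlier in the trace. A sketch of the launch-and-wait step; $SPDK is a placeholder for the repo root, and the polling loop is a reduction of waitforlisten (the log shows max_retries=100), not its full implementation:

    # Launch nvmf_tgt in the target namespace with the flags seen in the trace.
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    # Wait until the app answers on /var/tmp/spdk.sock (cf. "Waiting for process
    # to start up and listen on UNIX domain socket ..." above).
    for ((i = 0; i < 100; i++)); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done
    kill -0 "$nvmfpid"   # process still alive: startup succeeded
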
00:30:34.925 16:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:34.925 16:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:30:34.925 16:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:34.925 16:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:34.925 16:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:34.925 16:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:34.925 16:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:34.925 16:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:34.925 [2024-10-01 16:55:26.400185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:34.925 16:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:35.185 16:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.185 [2024-10-01 16:55:26.832666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.185 16:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:35.444 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:35.704 Malloc0 00:30:35.704 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:35.963 Delay0 00:30:35.963 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.964 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:36.223 NULL1 00:30:36.223 16:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
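
With the target listening, ns_hotplug_stress.sh builds the data path over RPC: a TCP transport, subsystem cnode1 capped at 10 namespaces (-m 10), data and discovery listeners on 10.0.0.2:4420, a Delay0 bdev layered on Malloc0 with artificial latency on all four latency knobs, and a NULL1 null bdev (1000 x 512). It then runs spdk_nvme_perf for 30 s while repeatedly detaching and re-attaching namespace 1 and resizing NULL1, which is what the repeated remove_ns/add_ns/null_size records that follow are. A condensed replay; $SPDK is a placeholder for the repo root, and the loop is reconstructed from the traced order (ns_hotplug_stress.sh@44-50), not copied verbatim from the script:

    rpc_py="$SPDK/scripts/rpc.py"
    # Data path, as traced above.
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc_py bdev_malloc_create 32 512 -b Malloc0
    $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc_py bdev_null_create NULL1 1000 512
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # Initiator load: 30 s of queued 512-byte random reads.
    "$SPDK/build/bin/spdk_nvme_perf" -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    # Hotplug stress: while perf is alive, detach/re-attach namespace 1 and grow
    # NULL1 by one unit per pass (null_size 1001, 1002, ... as in the records
    # below; each successful resize RPC prints "true").
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 "$null_size"
    done
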
00:30:36.482 16:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2875810 00:30:36.482 16:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:36.482 16:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:36.482 16:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.742 16:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:36.742 16:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:36.742 16:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:37.002 true 00:30:37.002 16:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:37.002 16:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.261 16:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.521 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:37.521 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:37.521 true 00:30:37.786 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:37.786 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.786 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.045 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:38.045 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:38.305 true 00:30:38.305 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:38.305 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.305 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.564 16:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:38.564 16:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:38.823 true 00:30:38.823 16:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:38.823 16:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.084 16:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.084 16:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:39.084 16:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:39.343 true 00:30:39.343 16:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:39.343 16:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.602 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.862 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:39.862 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:39.862 true 00:30:39.862 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:39.862 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.123 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.385 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:40.385 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:40.385 true 00:30:40.645 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:40.645 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.645 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.905 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:40.905 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:41.165 true 00:30:41.165 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:41.165 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.165 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.425 16:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:41.425 16:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:41.686 true 00:30:41.686 16:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:41.686 16:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.950 16:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.211 16:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:42.211 16:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:42.211 true 00:30:42.211 16:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 2875810 00:30:42.211 16:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.471 16:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.732 16:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:42.732 16:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:42.991 true 00:30:42.991 16:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:42.991 16:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.991 16:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.250 16:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:43.250 16:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:43.510 true 00:30:43.510 16:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:43.510 16:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.769 16:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.769 16:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:43.769 16:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:44.029 true 00:30:44.029 16:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:44.029 16:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.288 16:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.548 16:55:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:44.548 16:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:44.548 true 00:30:44.548 16:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:44.548 16:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.808 16:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:45.068 16:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:45.068 16:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:45.328 true 00:30:45.328 16:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:45.328 16:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:45.328 16:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:45.589 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:45.589 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:45.849 true 00:30:45.849 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:45.849 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.109 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:46.369 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:46.369 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:46.369 true 00:30:46.369 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:46.369 16:55:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.629 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:46.890 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:46.890 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:46.890 true 00:30:46.890 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:46.890 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.149 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:47.408 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:47.408 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:47.669 true 00:30:47.669 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:47.669 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.928 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:47.928 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:47.928 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:48.188 true 00:30:48.188 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:48.188 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.449 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:48.449 16:55:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:48.449 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:48.708 true 00:30:48.708 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:48.708 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.969 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:49.229 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:49.229 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:49.229 true 00:30:49.229 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:49.229 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.489 16:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:49.748 16:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:49.748 16:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:50.008 true 00:30:50.008 16:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:50.008 16:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.268 16:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.268 16:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:50.268 16:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:50.528 true 00:30:50.528 16:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:50.528 16:55:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.788 16:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:51.049 16:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:51.049 16:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:51.049 true 00:30:51.049 16:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:51.049 16:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.308 16:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:51.568 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:51.568 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:51.828 true 00:30:51.828 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:51.828 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.828 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.088 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:52.088 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:52.347 true 00:30:52.347 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:52.347 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.608 16:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.608 16:55:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:52.608 16:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:52.868 true 00:30:52.868 16:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:52.868 16:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.128 16:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.388 16:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:53.388 16:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:53.388 true 00:30:53.388 16:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:53.388 16:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.648 16:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.908 16:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:53.908 16:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:54.169 true 00:30:54.169 16:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:54.169 16:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.169 16:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.429 16:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:54.429 16:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:54.690 true 00:30:54.690 16:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:54.690 16:55:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.950 16:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.950 16:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:54.951 16:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:55.219 true 00:30:55.219 16:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:55.219 16:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.479 16:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.739 16:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:55.739 16:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:55.739 true 00:30:55.739 16:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:55.739 16:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.003 16:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.264 16:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:56.264 16:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:56.525 true 00:30:56.525 16:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:56.525 16:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.525 16:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.785 16:55:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:30:56.785 16:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:30:57.045 true 00:30:57.045 16:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:57.045 16:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.305 16:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.305 16:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:30:57.305 16:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:30:57.564 true 00:30:57.564 16:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:57.564 16:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.823 16:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.823 16:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:30:57.823 16:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:30:58.084 true 00:30:58.084 16:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:58.084 16:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.344 16:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.605 16:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:30:58.605 16:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:30:58.605 true 00:30:58.605 16:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:58.605 16:55:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.865 16:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.126 16:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:30:59.126 16:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:30:59.126 true 00:30:59.126 16:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:59.126 16:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.387 16:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.647 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:30:59.647 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:30:59.907 true 00:30:59.907 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:30:59.907 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.907 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.167 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:31:00.168 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:31:00.428 true 00:31:00.428 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:31:00.428 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.688 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.688 16:55:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:31:00.688 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:31:00.948 true 00:31:00.948 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:31:00.948 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.208 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.208 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:31:01.208 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:31:01.469 true 00:31:01.469 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:31:01.469 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.729 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.989 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:31:01.989 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:31:01.989 true 00:31:01.990 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:31:01.990 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.250 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.509 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:31:02.509 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:31:02.770 true 00:31:02.770 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:31:02.770 16:55:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.031 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.031 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:31:03.031 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:31:03.291 true 00:31:03.291 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:31:03.291 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.551 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.827 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:31:03.827 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:31:03.827 true 00:31:03.827 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:31:03.827 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.119 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.396 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:31:04.396 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:31:04.692 true 00:31:04.692 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:31:04.692 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.692 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.963 16:55:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:31:04.963 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:31:04.963 true 00:31:04.963 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:31:04.963 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.222 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.481 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:31:05.481 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:31:05.481 true 00:31:05.740 16:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:31:05.740 16:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.740 16:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.001 16:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:31:06.001 16:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:31:06.261 true 00:31:06.261 16:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:31:06.261 16:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.261 16:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.522 16:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:31:06.522 16:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:31:06.782 true 00:31:06.782 16:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810 00:31:06.782 16:55:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:06.782 Initializing NVMe Controllers
00:31:06.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:06.782 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:31:06.782 Controller IO queue size 128, less than required.
00:31:06.782 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:06.782 WARNING: Some requested NVMe devices were skipped
00:31:06.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:06.782 Initialization complete. Launching workers.
00:31:06.783 ========================================================
00:31:06.783                                                                            Latency(us)
00:31:06.783 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:31:06.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30608.93      14.95    4181.74    1316.02    8071.16
00:31:06.783 ========================================================
00:31:06.783 Total                                                                   :   30608.93      14.95    4181.74    1316.02    8071.16
00:31:06.783
00:31:07.043 16:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:07.043 16:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:31:07.043 16:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:31:07.303 true
00:31:07.303 16:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2875810
00:31:07.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2875810) - No such process
00:31:07.303 16:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2875810
00:31:07.303 16:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:07.562 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:31:07.821 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:31:07.821 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:31:07.822 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:31:07.822 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:07.822 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:31:07.822 null0 00:31:07.822 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:07.822 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:07.822 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:31:08.081 null1 00:31:08.081 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:08.081 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:08.081 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:08.342 null2 00:31:08.342 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:08.342 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:08.342 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:08.602 null3 00:31:08.602 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:08.602 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:08.602 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:08.602 null4 00:31:08.602 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:08.603 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:08.603 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:08.862 null5 00:31:08.862 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:08.862 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:08.862 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:09.123 null6 00:31:09.123 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:09.123 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:09.123 16:56:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:09.385 null7 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
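Up to the "No such process" diagnostic above, the trace is a single loop, ns_hotplug_stress.sh@44-@53: while the background I/O job (PID 2875810, whose latency summary is printed above) stays alive, namespace 1 is hot-removed and re-added and the NULL1 bdev is resized one step larger per pass (null_size 1025, 1026, ... 1053). A minimal bash sketch, reconstructed from those sh@ markers; $perf_pid is a hypothetical name for the traced PID and the loop framing is an assumption, not the verbatim script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1024
    while kill -0 "$perf_pid"; do                                       # sh@44: loop while the I/O child is alive
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-unplug namespace 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: plug it back under active I/O
        null_size=$((null_size + 1))                                    # sh@49: 1025, 1026, ... as logged
        "$rpc" bdev_null_resize NULL1 "$null_size"                      # sh@50: prints "true" on success, as logged
    done
    wait "$perf_pid"                                                    # sh@53: reap the child once kill -0 fails

The "line 44: kill: (2875810) - No such process" message above is that kill -0 probe failing once the I/O job exits; that is what ends the first phase.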
00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
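The sh@58-@66 lines interleaved through this stretch are the second phase's setup: eight null bdevs (null0 through null7, each created with size 100 and a 4096-byte block size per the bdev_null_create arguments), then eight backgrounded add_remove workers, one namespace ID each, reaped by the single wait traced below with all eight worker PIDs. A sketch of that launcher, reconstructed from the sh@ markers; the loop framing is an assumption rather than the verbatim script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8                                       # sh@58
    pids=()                                          # sh@58
    for ((i = 0; i < nthreads; i++)); do             # sh@59
        "$rpc" bdev_null_create "null$i" 100 4096    # sh@60: echoes the new bdev name, as logged
    done
    for ((i = 0; i < nthreads; i++)); do             # sh@62
        add_remove "$((i + 1))" "null$i" &           # sh@63: nsid 1..8 against bdev null0..null7
        pids+=($!)                                   # sh@64: collect the worker PID
    done
    wait "${pids[@]}"                                # sh@66: block until all eight workers finish

Because the eight workers run concurrently against the same subsystem, their traces interleave from here on, which is why lines with different loop counters and namespace IDs alternate.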
00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
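Every rpc.py call in this trace is the same thin JSON-RPC 2.0 client talking to the running SPDK target over a Unix domain socket. Purely as an illustration (the socket path shown is SPDK's default, /var/tmp/spdk.sock, which may differ in this job, and the params layout follows SPDK's documented nvmf_subsystem_add_ns schema rather than anything printed in this log), one namespace attach traced at sh@17 amounts to:

    # Hypothetical raw equivalent of one traced "nvmf_subsystem_add_ns" call;
    # assumes the default SPDK RPC socket and an nc that supports -U.
    printf '%s' '{"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
      "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                 "namespace": {"bdev_name": "null0", "nsid": 1}}}' \
        | nc -U /var/tmp/spdk.sock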
00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:09.385 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:09.386 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2881320 2881322 2881324 2881325 2881327 2881329 2881331 2881333 00:31:09.386 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:09.386 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:09.386 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.386 16:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:09.386 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:09.386 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:09.647 16:56:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:09.647 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:09.908 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:09.908 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.908 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:09.908 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:09.908 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:09.908 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:09.908 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:09.908 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
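Each backgrounded worker runs the add_remove helper whose body is what the sh@14-@18 lines here show: ten rounds of attaching its null bdev as its namespace ID and detaching it again, racing the seven sibling workers on the same controller. A sketch reconstructed from those markers; the function framing is assumed, while the arguments and the iteration count of 10 are read off the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2                                                           # sh@14: e.g. nsid=8 bdev=null7
        for ((i = 0; i < 10; i++)); do                                                  # sh@16: ten add/remove rounds
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }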
00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:10.169 16:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
[... several hundred near-identical xtrace records condensed: target/ns_hotplug_stress.sh@16-18 keeps cycling (( ++i )) / (( i < 10 )) while invoking rpc.py nvmf_subsystem_add_ns (-n N with backing bdev null(N-1), N = 1..8) and nvmf_subsystem_remove_ns (nsid 1..8) against nqn.2016-06.io.spdk:cnode1; timestamps advance from 00:31:10.169 / 16:56:01 to 00:31:13.300 / 16:56:04, with interleaved and occasionally duplicated counter lines from the concurrently running add/remove activity ...]
00:31:13.300 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:13.300 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.300 16:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:13.559 rmmod nvme_tcp 00:31:13.559 rmmod nvme_fabrics 00:31:13.559 rmmod nvme_keyring 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 2875396 ']' 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 2875396 
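For readers tracing the loop above: target/ns_hotplug_stress.sh@16-18 is a counted loop that hot-adds and hot-removes namespaces on the running subsystem through the JSON-RPC client. A minimal standalone sketch of that pattern, assuming a target that already exposes nqn.2016-06.io.spdk:cnode1 with null bdevs null0..null7 as this run does (the randomized choice below is illustrative, not the script's exact selection logic):

    #!/usr/bin/env bash
    # Sketch only: stress namespace hotplug by adding and removing nsids in a
    # loop, tolerating failures when an nsid is already present or already gone.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    i=0
    while (( i < 10 )); do
        n=$(( RANDOM % 8 + 1 ))                                   # nsid in 1..8
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$(( n - 1 ))" || true
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$(( RANDOM % 8 + 1 ))" || true
        (( ++i ))
    done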
00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2875396 ']' 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2875396 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:13.559 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2875396 00:31:13.819 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:13.819 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:13.819 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2875396' 00:31:13.819 killing process with pid 2875396 00:31:13.819 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2875396 00:31:13.819 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2875396 00:31:13.819 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:13.819 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:13.819 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:13.819 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:13.819 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:31:13.819 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:13.819 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:31:13.819 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:13.819 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:13.819 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.819 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.819 16:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:16.363 00:31:16.363 real 0m49.503s 00:31:16.363 user 3m8.264s 00:31:16.363 sys 0m23.568s 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:16.363 16:56:07 
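The common/autotest_common.sh@950-974 records above are the killprocess helper: it checks that the pid is set and alive, branches on the OS (the '[' Linux = Linux ']' record), refuses to kill a bare sudo wrapper, then kills and reaps the target (comm reactor_1 here). A rough sketch of that flow, simplified from what the trace shows rather than the helper's full source:

    # Sketch: stop a test daemon by pid and reap it, as traced above.
    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1                    # no pid given
        kill -0 "$pid" 2>/dev/null || return 0       # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")      # reactor_1 for an SPDK target
        [ "$name" = sudo ] && return 1               # never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true              # reap if it is our child
    }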
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:16.363 ************************************ 00:31:16.363 END TEST nvmf_ns_hotplug_stress 00:31:16.363 ************************************ 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:16.363 ************************************ 00:31:16.363 START TEST nvmf_delete_subsystem 00:31:16.363 ************************************ 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:16.363 * Looking for test storage... 00:31:16.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:16.363 16:56:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:16.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.363 --rc genhtml_branch_coverage=1 00:31:16.363 --rc genhtml_function_coverage=1 00:31:16.363 --rc genhtml_legend=1 00:31:16.363 --rc geninfo_all_blocks=1 00:31:16.363 --rc geninfo_unexecuted_blocks=1 00:31:16.363 00:31:16.363 ' 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:16.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.363 --rc genhtml_branch_coverage=1 00:31:16.363 --rc genhtml_function_coverage=1 00:31:16.363 --rc genhtml_legend=1 00:31:16.363 --rc geninfo_all_blocks=1 00:31:16.363 --rc geninfo_unexecuted_blocks=1 00:31:16.363 00:31:16.363 ' 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:16.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.363 --rc genhtml_branch_coverage=1 00:31:16.363 --rc genhtml_function_coverage=1 00:31:16.363 --rc genhtml_legend=1 00:31:16.363 --rc geninfo_all_blocks=1 00:31:16.363 --rc 
geninfo_unexecuted_blocks=1 00:31:16.363 00:31:16.363 ' 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:16.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.363 --rc genhtml_branch_coverage=1 00:31:16.363 --rc genhtml_function_coverage=1 00:31:16.363 --rc genhtml_legend=1 00:31:16.363 --rc geninfo_all_blocks=1 00:31:16.363 --rc geninfo_unexecuted_blocks=1 00:31:16.363 00:31:16.363 ' 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:16.363 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same three directories repeated from earlier sourcing passes, condensed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... repeats condensed as above ...] 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... repeats condensed as above ...] 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo [... exported PATH, repeats condensed as above ...] 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:16.364 16:56:07 
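Earlier in this test's prologue (scripts/common.sh@333-368) the lt 1.15 2 / cmp_versions trace splits each version string on '.', '-' and ':' and compares it component by component to pick lcov option names. A condensed sketch of that comparison, reduced to the '<' case only (the traced helper also handles other operators and normalizes components through its decimal function):

    # Sketch of the component-wise version compare traced above ('<' case only).
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}    # missing components count as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                                     # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov is older than 2"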
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:16.364 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:24.498 16:56:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:24.498 16:56:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:24.498 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:24.498 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.498 16:56:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:24.498 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:24.498 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:24.498 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:24.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:24.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:31:24.499 00:31:24.499 --- 10.0.0.2 ping statistics --- 00:31:24.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.499 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:31:24.499 16:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:24.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:24.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:31:24.499 00:31:24.499 --- 10.0.0.1 ping statistics --- 00:31:24.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.499 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=2886248 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 2886248 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2886248 ']' 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
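The nvmftestinit trace above amounts to a short iproute2 recipe: one port of the E810 pair (cvl_0_0) is moved into a private network namespace for the target while its peer (cvl_0_1) stays in the default namespace as the initiator side, so the NVMe/TCP traffic really crosses the link on a single host. A standalone sketch of the same topology, using the interface names and addresses taken from this log (substitute your own NICs):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the default port
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

The sub-millisecond round trips in the ping output above confirm the back-to-back link is healthy before the target is started.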
00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:24.499 16:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:24.499 [2024-10-01 16:56:15.113939] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:24.499 [2024-10-01 16:56:15.115024] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:31:24.499 [2024-10-01 16:56:15.115073] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:24.499 [2024-10-01 16:56:15.201293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:24.499 [2024-10-01 16:56:15.293198] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:24.499 [2024-10-01 16:56:15.293253] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:24.499 [2024-10-01 16:56:15.293261] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:24.499 [2024-10-01 16:56:15.293268] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:24.499 [2024-10-01 16:56:15.293274] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:24.499 [2024-10-01 16:56:15.293400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:24.499 [2024-10-01 16:56:15.293405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.499 [2024-10-01 16:56:15.367655] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:24.499 [2024-10-01 16:56:15.367777] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:24.499 [2024-10-01 16:56:15.367906] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
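These startup notices show what nvmfappstart actually launched: nvmf_tgt pinned to core mask 0x3 (reactors on cores 0 and 1) with --interrupt-mode, so the reactors block on event file descriptors instead of busy-polling, and every spdk_thread is switched to interrupt mode as it is created. A minimal sketch of starting the target the same way and waiting for its RPC socket; the polling loop is an illustrative stand-in for the waitforlisten helper the trace uses, not its exact logic:

  # Same invocation as the trace, run inside the target namespace (paths assume an SPDK build tree)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # One way to wait for readiness: poll a harmless RPC until the target answers on /var/tmp/spdk.sock
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  echo "nvmf_tgt (pid $nvmfpid) is up"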
00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:24.499 [2024-10-01 16:56:16.058365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:24.499 [2024-10-01 16:56:16.094675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.499 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:24.500 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.500 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:24.500 NULL1 00:31:24.500 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.500 16:56:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:24.500 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.500 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:24.500 Delay0 00:31:24.500 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.500 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:24.500 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.500 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:24.500 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.500 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2886320 00:31:24.500 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:24.500 16:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:24.760 [2024-10-01 16:56:16.183580] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
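Everything the rpc_cmd wrappers above did can be replayed with plain rpc.py calls; the arguments below are copied from the trace (rpc.py talks to /var/tmp/spdk.sock by default):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512      # 1000 MB null bdev with 512-byte blocks
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The bdev_delay_create latencies are given in microseconds, so Delay0 adds roughly one second to every I/O; with spdk_nvme_perf driving queue depth 128, that guarantees a large backlog is still in flight when nvmf_delete_subsystem fires below. The resulting flood of 'Read/Write completed with error (sct=0, sc=8)' lines is the abort path (a generic NVMe status reporting commands aborted when their queue is torn down), and 'starting I/O failed: -6' appears to be -ENXIO from submissions attempted on the dying qpair; both are the expected outcome of this test, not a failure.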
00:31:26.668 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:26.668 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.668 16:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 starting I/O failed: -6 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Write completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 starting I/O failed: -6 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 starting I/O failed: -6 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Write completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 starting I/O failed: -6 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 starting I/O failed: -6 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Write completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 starting I/O failed: -6 00:31:26.668 Write completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 starting I/O failed: -6 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 starting I/O failed: -6 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Write completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Write completed with error (sct=0, sc=8) 00:31:26.668 starting I/O failed: -6 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Write completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 starting I/O failed: -6 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Write completed with error (sct=0, sc=8) 00:31:26.668 [2024-10-01 16:56:18.261503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa8d8000c00 is same with the state(6) to be set 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Write completed with error (sct=0, sc=8) 00:31:26.668 Write completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 
00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.668 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 starting I/O failed: -6 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 starting I/O failed: -6 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 starting I/O failed: -6 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 starting I/O failed: -6 00:31:26.669 Write completed with error (sct=0, 
sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 starting I/O failed: -6 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 starting I/O failed: -6 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 starting I/O failed: -6 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 starting I/O failed: -6 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 starting I/O failed: -6 00:31:26.669 [2024-10-01 16:56:18.261968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe05390 is same with the state(6) to be set 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Write completed with error (sct=0, 
sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 Read completed with error (sct=0, sc=8) 00:31:26.669 [2024-10-01 16:56:18.262353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe05750 is same with the state(6) to be set 00:31:27.610 [2024-10-01 16:56:19.239959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06a70 is same with the state(6) to be set 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Write completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Write completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 [2024-10-01 16:56:19.264994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe05930 is same with the state(6) to be set 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Write completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Write completed with error (sct=0, sc=8) 00:31:27.610 Write completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Write completed with error (sct=0, sc=8) 00:31:27.610 Write completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Write completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Write completed with error (sct=0, sc=8) 00:31:27.610 Write completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 [2024-10-01 16:56:19.265263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa8d800d7c0 is same with the state(6) to be set 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Write completed with error (sct=0, sc=8) 00:31:27.610 Write completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Read completed with error (sct=0, sc=8) 00:31:27.610 Write completed with error (sct=0, sc=8) 00:31:27.611 Write completed with error (sct=0, sc=8) 00:31:27.611 Read completed with error (sct=0, sc=8) 00:31:27.611 Read completed with error (sct=0, sc=8) 00:31:27.611 Write completed with error (sct=0, sc=8) 00:31:27.611 [2024-10-01 16:56:19.265321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe05570 is same with the state(6) to be set 00:31:27.611 Write completed with error (sct=0, sc=8) 00:31:27.611 Read completed with error (sct=0, sc=8) 00:31:27.611 Read completed with error (sct=0, sc=8) 00:31:27.611 Read completed with error (sct=0, sc=8) 00:31:27.611 Write completed with error (sct=0, sc=8) 00:31:27.611 Read completed with 
error (sct=0, sc=8) 00:31:27.611 Read completed with error (sct=0, sc=8) 00:31:27.611 Read completed with error (sct=0, sc=8) 00:31:27.611 Read completed with error (sct=0, sc=8) 00:31:27.611 Read completed with error (sct=0, sc=8) 00:31:27.611 Read completed with error (sct=0, sc=8) 00:31:27.611 Write completed with error (sct=0, sc=8) 00:31:27.611 Write completed with error (sct=0, sc=8) 00:31:27.611 Read completed with error (sct=0, sc=8) 00:31:27.611 Read completed with error (sct=0, sc=8) 00:31:27.611 Write completed with error (sct=0, sc=8) 00:31:27.611 Read completed with error (sct=0, sc=8) 00:31:27.611 Write completed with error (sct=0, sc=8) 00:31:27.611 Read completed with error (sct=0, sc=8) 00:31:27.611 Read completed with error (sct=0, sc=8) 00:31:27.611 Read completed with error (sct=0, sc=8) 00:31:27.611 [2024-10-01 16:56:19.265404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa8d800cfe0 is same with the state(6) to be set 00:31:27.611 Initializing NVMe Controllers 00:31:27.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:27.611 Controller IO queue size 128, less than required. 00:31:27.611 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:27.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:27.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:27.611 Initialization complete. Launching workers. 00:31:27.611 ======================================================== 00:31:27.611 Latency(us) 00:31:27.611 Device Information : IOPS MiB/s Average min max 00:31:27.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 145.19 0.07 958270.30 398.22 1042512.94 00:31:27.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.09 0.08 907713.36 243.91 1041593.87 00:31:27.611 ======================================================== 00:31:27.611 Total : 309.28 0.15 931447.48 243.91 1042512.94 00:31:27.611 00:31:27.611 [2024-10-01 16:56:19.265887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe06a70 (9): Bad file descriptor 00:31:27.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:31:27.611 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.611 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:31:27.611 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2886320 00:31:27.611 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:31:28.180 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2886320 00:31:28.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2886320) - No such process 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2886320 00:31:28.181 16:56:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2886320 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2886320 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:28.181 [2024-10-01 16:56:19.798682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@54 -- # perf_pid=2886931 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2886931 00:31:28.181 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:28.440 [2024-10-01 16:56:19.863769] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:31:28.701 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:28.701 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2886931 00:31:28.701 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:29.271 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:29.271 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2886931 00:31:29.271 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:29.840 16:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:29.840 16:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2886931 00:31:29.841 16:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:30.411 16:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:30.411 16:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2886931 00:31:30.411 16:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:30.671 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:30.672 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2886931 00:31:30.672 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:31.242 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:31.242 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2886931 00:31:31.242 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 
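The repeating kill -0 / sleep 0.5 / (( delay++ > 20 )) triplets above are delete_subsystem.sh waiting, with a bounded budget, for spdk_nvme_perf to notice that its subsystem vanished and exit. Condensed, the loop is roughly the following (variable names shortened; the real script is at the line numbers shown in the trace):

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do    # perf still running?
      (( delay++ > 20 )) && exit 1             # give up after ~10 s (20 polls of 0.5 s)
      sleep 0.5
  done
  wait "$perf_pid"                             # reap it once kill -0 reports 'No such process'

In the perf summary that follows, the minimum latencies of a little over 1,000,000 us are the Delay0 bdev's one-second delay showing through end to end.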
00:31:31.502 Initializing NVMe Controllers
00:31:31.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:31.502 Controller IO queue size 128, less than required.
00:31:31.502 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:31.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:31.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:31.502 Initialization complete. Launching workers.
00:31:31.502 ========================================================
00:31:31.502 Latency(us)
00:31:31.502 Device Information : IOPS MiB/s Average min max
00:31:31.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003701.40 1000213.87 1041125.26
00:31:31.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002717.19 1000215.03 1009755.06
00:31:31.502 ========================================================
00:31:31.502 Total : 256.00 0.12 1003209.30 1000213.87 1041125.26
00:31:31.502
00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2886931
00:31:31.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2886931) - No such process
00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2886931
00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup
00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:31.761 rmmod nvme_tcp
00:31:31.761 rmmod nvme_fabrics
00:31:31.761 rmmod nvme_keyring
00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 2886248 ']'
00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 2886248
00:31:31.761 16:56:23
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2886248 ']' 00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2886248 00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:31.761 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2886248 00:31:32.021 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:32.022 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:32.022 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2886248' 00:31:32.022 killing process with pid 2886248 00:31:32.022 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2886248 00:31:32.022 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2886248 00:31:32.022 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:32.022 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:32.022 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:32.022 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:32.022 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:31:32.022 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:31:32.022 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:32.022 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:32.022 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:32.022 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.022 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:32.022 16:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:34.565 00:31:34.565 real 0m18.123s 00:31:34.565 user 0m26.122s 00:31:34.565 sys 0m7.482s 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:31:34.565 ************************************ 00:31:34.565 END TEST nvmf_delete_subsystem 00:31:34.565 ************************************ 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:34.565 ************************************ 00:31:34.565 START TEST nvmf_host_management 00:31:34.565 ************************************ 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:34.565 * Looking for test storage... 00:31:34.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:34.565 16:56:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:34.565 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:34.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.566 --rc genhtml_branch_coverage=1 00:31:34.566 --rc genhtml_function_coverage=1 00:31:34.566 --rc genhtml_legend=1 00:31:34.566 --rc geninfo_all_blocks=1 00:31:34.566 --rc geninfo_unexecuted_blocks=1 00:31:34.566 00:31:34.566 ' 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:34.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.566 --rc genhtml_branch_coverage=1 00:31:34.566 --rc genhtml_function_coverage=1 00:31:34.566 --rc genhtml_legend=1 00:31:34.566 --rc geninfo_all_blocks=1 00:31:34.566 --rc geninfo_unexecuted_blocks=1 00:31:34.566 00:31:34.566 ' 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:34.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.566 --rc genhtml_branch_coverage=1 00:31:34.566 --rc genhtml_function_coverage=1 00:31:34.566 --rc genhtml_legend=1 00:31:34.566 --rc geninfo_all_blocks=1 00:31:34.566 --rc geninfo_unexecuted_blocks=1 00:31:34.566 00:31:34.566 ' 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:34.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.566 --rc genhtml_branch_coverage=1 00:31:34.566 --rc genhtml_function_coverage=1 00:31:34.566 --rc genhtml_legend=1 00:31:34.566 --rc geninfo_all_blocks=1 00:31:34.566 --rc geninfo_unexecuted_blocks=1 00:31:34.566 00:31:34.566 ' 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
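The common.sh@17-@19 lines above derive the initiator's host identity: nvme gen-hostnqn emits a UUID-based NQN, and that UUID doubles as the host ID. As a standalone sketch (variable names from the trace; the parameter expansion used to strip the prefix is an assumption, not the verbatim common.sh source):

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-... in this run
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # assumed extraction of the bare UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

Later connect commands can then pass "${NVME_HOST[@]}" so the target always sees the same host NQN/ID pair.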
00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:34.566 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:34.567 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:34.567 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:34.567 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:34.567 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:34.567 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:34.567 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:34.567 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:34.567 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.567 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:34.567 16:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:34.567 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:34.567 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:34.567 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:34.567 16:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # 
pci_net_devs=() 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:42.702 16:56:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:42.702 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:42.702 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.702 16:56:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:42.702 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:42.702 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
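Device discovery above is a sysfs walk: for each matching E810 PCI function, common.sh@409 globs /sys/bus/pci/devices/<bdf>/net/ to find the bound kernel interface, which is how the cvl_0_0/cvl_0_1 names get reported. A standalone sketch using the addresses seen in this run:

    # Resolve each NIC's net device the same way nvmf/common.sh@408-@427 does above.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $path ]] || continue
            echo "Found net devices under $pci: ${path##*/}"   # cvl_0_0, cvl_0_1 here
        done
    done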
00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:42.702 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:42.703 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:42.703 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:42.703 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:42.703 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:42.703 16:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:42.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:42.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:31:42.703 00:31:42.703 --- 10.0.0.2 ping statistics --- 00:31:42.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.703 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:42.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:42.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:31:42.703 00:31:42.703 --- 10.0.0.1 ping statistics --- 00:31:42.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.703 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=2891449 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 2891449 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2891449 ']' 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:42.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:42.703 [2024-10-01 16:56:33.312623] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:42.703 [2024-10-01 16:56:33.313470] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:31:42.703 [2024-10-01 16:56:33.313508] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:42.703 [2024-10-01 16:56:33.366654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:42.703 [2024-10-01 16:56:33.424546] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:42.703 [2024-10-01 16:56:33.424579] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:42.703 [2024-10-01 16:56:33.424585] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:42.703 [2024-10-01 16:56:33.424593] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:42.703 [2024-10-01 16:56:33.424597] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:42.703 [2024-10-01 16:56:33.424702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:42.703 [2024-10-01 16:56:33.424835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:42.703 [2024-10-01 16:56:33.424958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:31:42.703 [2024-10-01 16:56:33.424960] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:42.703 [2024-10-01 16:56:33.482521] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:42.703 [2024-10-01 16:56:33.482624] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:42.703 [2024-10-01 16:56:33.482742] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:42.703 [2024-10-01 16:56:33.482937] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:42.703 [2024-10-01 16:56:33.483122] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
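The nvmf_tcp_init trace above (common.sh@250-@291) is the physical-NIC topology setup: one E810 port is moved into a private network namespace so the target (10.0.0.2, inside the namespace) and the initiator (10.0.0.1, root namespace) exchange real TCP traffic over the link, and nvmf_tgt is then launched under ip netns exec with --interrupt-mode. Condensed from the traced commands:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port toward the initiator; the comment tags the rule so
    # the SPDK_NVMF grep in the iptables-save cleanup can strip it on teardown.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                               # both directions verified above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1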
00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:42.703 [2024-10-01 16:56:33.549351] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:42.703 Malloc0 00:31:42.703 [2024-10-01 16:56:33.613495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2891507 00:31:42.703 16:56:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2891507 /var/tmp/bdevperf.sock 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2891507 ']' 00:31:42.703 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:42.704 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:42.704 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:42.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:42.704 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:42.704 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:42.704 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:42.704 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:42.704 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:31:42.704 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:31:42.704 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:42.704 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:42.704 { 00:31:42.704 "params": { 00:31:42.704 "name": "Nvme$subsystem", 00:31:42.704 "trtype": "$TEST_TRANSPORT", 00:31:42.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:42.704 "adrfam": "ipv4", 00:31:42.704 "trsvcid": "$NVMF_PORT", 00:31:42.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.704 "hdgst": ${hdgst:-false}, 00:31:42.704 "ddgst": ${ddgst:-false} 00:31:42.704 }, 00:31:42.704 "method": "bdev_nvme_attach_controller" 00:31:42.704 } 00:31:42.704 EOF 00:31:42.704 )") 00:31:42.704 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:31:42.704 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
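The @558-@580 heredoc above is gen_nvmf_target_json at work: it stamps out one controller object per subsystem (here just Nvme0), jq validates and joins the fragments, and the expanded JSON is printed by the @584 printf just below. bdevperf consumes it through a file descriptor rather than a file on disk; a hedged reconstruction of the @72-@74 launch (the process substitution is an assumption about where /dev/fd/63 comes from):

    # bdevperf reads the generated config via process substitution (/dev/fd/63).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!                     # 2891507 in this run, waited on via waitforlisten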
00:31:42.704 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:31:42.704 16:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:42.704 "params": { 00:31:42.704 "name": "Nvme0", 00:31:42.704 "trtype": "tcp", 00:31:42.704 "traddr": "10.0.0.2", 00:31:42.704 "adrfam": "ipv4", 00:31:42.704 "trsvcid": "4420", 00:31:42.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:42.704 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:42.704 "hdgst": false, 00:31:42.704 "ddgst": false 00:31:42.704 }, 00:31:42.704 "method": "bdev_nvme_attach_controller" 00:31:42.704 }' 00:31:42.704 [2024-10-01 16:56:33.719142] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:31:42.704 [2024-10-01 16:56:33.719191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891507 ] 00:31:42.704 [2024-10-01 16:56:33.796053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.704 [2024-10-01 16:56:33.858679] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.704 Running I/O for 10 seconds... 00:31:42.963 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:42.963 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:31:42.963 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:42.963 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.963 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:42.963 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.963 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:42.963 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:42.963 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:42.963 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:42.963 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:42.963 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:42.963 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:42.963 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:42.963 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:42.963 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:42.963 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.963 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:43.225 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.225 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:31:43.225 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:31:43.225 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:43.225 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:43.225 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:43.225 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:43.226 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.226 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:43.226 [2024-10-01 16:56:34.689328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.689363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.689370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.689381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.689386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.689391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.689396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.689401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.689406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.689411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.689416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 
00:31:43.226 [2024-10-01 16:56:34.689421 through 16:56:34.689628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set [the identical message repeats verbatim, once per state poll, roughly 40 more times over this interval; duplicate lines omitted] 00:31:43.226 [2024-10-01 16:56:34.689632] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.689637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.689642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.689648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.689652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.689657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.689662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.689667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.689672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e68e0 is same with the state(6) to be set 00:31:43.226 [2024-10-01 16:56:34.690264] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:31:43.226 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.226 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:43.226 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.226 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:43.226 [2024-10-01 16:56:34.696185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.226 [2024-10-01 16:56:34.696202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.226 [2024-10-01 16:56:34.696211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.226 [2024-10-01 16:56:34.696218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.226 [2024-10-01 16:56:34.696226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.226 [2024-10-01 16:56:34.696232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.226 [2024-10-01 16:56:34.696240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.226 [2024-10-01 16:56:34.696247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
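The rpc_cmd call above is the autotest wrapper around SPDK's JSON-RPC client. Outside the harness, the same host-whitelisting step can be issued directly with scripts/rpc.py against the running target; a minimal sketch, assuming the default RPC socket at /var/tmp/spdk.sock:

    # allow host0 to connect to subsystem cnode0 (the call that rpc_cmd wraps)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0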
00:31:43.226 [2024-10-01 16:56:34.696254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe3d60 is same with the state(6) to be set
00:31:43.226 [2024-10-01 16:56:34.706196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe3d60 (9): Bad file descriptor
00:31:43.226 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:43.226 16:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:31:43.227 [2024-10-01 16:56:34.716254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:43.227 [2024-10-01 16:56:34.716266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:43.227 [... READ command / ABORTED - SQ DELETION completion pair repeated for cid:1 (lba:128) through cid:63 (lba:8064), 16:56:34.716280 through 16:56:34.717268 ...]
00:31:43.228 [2024-10-01 16:56:34.717276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc140 is same with the state(6) to be set
00:31:43.228 task offset: 0 on job bdev=Nvme0n1 fails
00:31:43.228 
00:31:43.228 Latency(us)
00:31:43.228 Device Information                                                        : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average      min      max
00:31:43.228 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:43.228 Job: Nvme0n1 ended in about 0.68 seconds with error
00:31:43.228 Verification LBA range: start 0x0 length 0x400
00:31:43.228 	 Nvme0n1                                                                  :       0.68 1506.61   94.16   94.16  0.00  39197.93  7612.26 39321.60
00:31:43.228 ===================================================================================================================
00:31:43.228 Total                                                                     :            1506.61   94.16   94.16  0.00  39197.93  7612.26 39321.60
00:31:43.228 [2024-10-01 16:56:34.720278] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:31:43.228 [2024-10-01 16:56:34.720306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:31:43.228 [2024-10-01 16:56:34.725793] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
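A quick cross-check of the table above: bdevperf was launched with -o 65536 (65536-byte IOs), so the MiB/s column should equal IOPS * 65536 / 1048576. A one-line check of the failed run's figures:

    # 1506.61 IOPS at 64 KiB per IO -> MiB/s (matches the 94.16 reported above)
    awk 'BEGIN { printf "%.2f\n", 1506.61 * 65536 / 1048576 }'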
00:31:44.167 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2891507
00:31:44.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2891507) - No such process
00:31:44.167 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:31:44.167 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:31:44.168 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:31:44.168 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:31:44.168 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=()
00:31:44.168 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config
00:31:44.168 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:31:44.168 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:31:44.168 {
00:31:44.168   "params": {
00:31:44.168     "name": "Nvme$subsystem",
00:31:44.168     "trtype": "$TEST_TRANSPORT",
00:31:44.168     "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:44.168     "adrfam": "ipv4",
00:31:44.168     "trsvcid": "$NVMF_PORT",
00:31:44.168     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:44.168     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:44.168     "hdgst": ${hdgst:-false},
00:31:44.168     "ddgst": ${ddgst:-false}
00:31:44.168   },
00:31:44.168   "method": "bdev_nvme_attach_controller"
00:31:44.168 }
00:31:44.168 EOF
00:31:44.168 )")
00:31:44.168 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat
00:31:44.168 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq .
00:31:44.168 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=,
00:31:44.168 16:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:31:44.168   "params": {
00:31:44.168     "name": "Nvme0",
00:31:44.168     "trtype": "tcp",
00:31:44.168     "traddr": "10.0.0.2",
00:31:44.168     "adrfam": "ipv4",
00:31:44.168     "trsvcid": "4420",
00:31:44.168     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:44.168     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:44.168     "hdgst": false,
00:31:44.168     "ddgst": false
00:31:44.168   },
00:31:44.168   "method": "bdev_nvme_attach_controller"
00:31:44.168 }'
00:31:44.168 [2024-10-01 16:56:35.765503] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization...
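gen_nvmf_target_json renders the bdev_nvme_attach_controller config that bdevperf reads over /dev/fd/62. A minimal sketch of the equivalent standalone run, assuming the rendered JSON above is saved to a file named nvme0.json (hypothetical name):

    # same flags as the logged invocation: queue depth 64, 64 KiB IOs, verify workload, 1 second
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/bdevperf --json nvme0.json -q 64 -o 65536 -w verify -t 1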
00:31:44.168 [2024-10-01 16:56:35.765553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891845 ]
00:31:44.168 [2024-10-01 16:56:35.841575] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:44.428 [2024-10-01 16:56:35.903198] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:31:44.688 Running I/O for 1 seconds...
00:31:45.629 2457.00 IOPS, 153.56 MiB/s
00:31:45.629 
00:31:45.629 Latency(us)
00:31:45.629 Device Information                                                        : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average      min      max
00:31:45.629 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:45.629 Verification LBA range: start 0x0 length 0x400
00:31:45.629 	 Nvme0n1                                                                  :       1.01 2499.19  156.20    0.00  0.00  25048.56  1575.38 27827.59
00:31:45.629 ===================================================================================================================
00:31:45.629 Total                                                                     :            2499.19  156.20    0.00  0.00  25048.56  1575.38 27827.59
00:31:45.629 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:31:45.629 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:45.889 rmmod nvme_tcp
00:31:45.889 rmmod nvme_fabrics
00:31:45.889 rmmod nvme_keyring
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 2891449 ']'
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 2891449
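nvmfcleanup wraps the module unload above in set +e because the first attempts can fail while the module is still referenced; the trace only shows the loop head and a successful unload. A sketch of the idiom, with an assumed one-second back-off between attempts:

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # rmmod output is printed on success
        sleep 1                            # assumption: pause before retrying
    done
    set -e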
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2891449 ']'
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2891449
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:45.889 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2891449
00:31:45.890 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:31:45.890 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:31:45.890 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2891449'
00:31:45.890 killing process with pid 2891449
00:31:45.890 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2891449
00:31:45.890 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2891449
00:31:45.890 [2024-10-01 16:56:37.560473] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:31:46.150 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:31:46.150 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:31:46.150 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:31:46.150 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:31:46.150 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save
00:31:46.150 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:31:46.150 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore
00:31:46.150 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:46.150 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:46.151 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:46.151 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:46.151 16:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:48.090 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:48.090 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:31:48.090 
00:31:48.090 real	0m13.878s
00:31:48.090 user	0m19.715s
00:31:48.090 sys	0m7.277s
16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:48.090 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:48.090 ************************************
00:31:48.090 END TEST nvmf_host_management
00:31:48.090 ************************************
00:31:48.090 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:31:48.090 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:31:48.090 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:48.090 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:48.090 ************************************
00:31:48.090 START TEST nvmf_lvol
00:31:48.090 ************************************
00:31:48.090 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:31:48.351 * Looking for test storage...
00:31:48.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version
00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:31:48.351 16:56:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:48.351 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:48.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.352 --rc genhtml_branch_coverage=1 00:31:48.352 --rc genhtml_function_coverage=1 00:31:48.352 --rc genhtml_legend=1 00:31:48.352 --rc geninfo_all_blocks=1 00:31:48.352 --rc geninfo_unexecuted_blocks=1 00:31:48.352 00:31:48.352 ' 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:48.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.352 --rc genhtml_branch_coverage=1 00:31:48.352 --rc genhtml_function_coverage=1 00:31:48.352 --rc genhtml_legend=1 00:31:48.352 --rc geninfo_all_blocks=1 00:31:48.352 --rc geninfo_unexecuted_blocks=1 00:31:48.352 00:31:48.352 ' 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:48.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.352 --rc genhtml_branch_coverage=1 00:31:48.352 --rc genhtml_function_coverage=1 00:31:48.352 --rc genhtml_legend=1 00:31:48.352 --rc geninfo_all_blocks=1 00:31:48.352 --rc geninfo_unexecuted_blocks=1 00:31:48.352 00:31:48.352 ' 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:48.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.352 --rc genhtml_branch_coverage=1 00:31:48.352 --rc genhtml_function_coverage=1 00:31:48.352 --rc 
genhtml_legend=1 00:31:48.352 --rc geninfo_all_blocks=1 00:31:48.352 --rc geninfo_unexecuted_blocks=1 00:31:48.352 00:31:48.352 ' 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:48.352 16:56:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:48.352 16:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:56.489 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:56.489 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:56.489 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:56.489 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:56.489 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:56.489 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:56.489 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:56.489 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:56.489 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:56.489 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:56.489 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:56.490 16:56:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:56.490 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:56.490 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:56.490 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:56.490 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:56.490 
16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:56.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:56.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:31:56.490 00:31:56.490 --- 10.0.0.2 ping statistics --- 00:31:56.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.490 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:56.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:56.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:31:56.490 00:31:56.490 --- 10.0.0.1 ping statistics --- 00:31:56.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.490 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:56.490 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:56.491 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:56.491 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:56.491 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:56.491 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:56.491 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:56.491 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:56.491 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:56.491 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:56.491 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:56.491 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=2896246 00:31:56.491 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 2896246 00:31:56.491 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:56.491 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2896246 ']' 00:31:56.491 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.491 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:56.491 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.491 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:56.491 16:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:56.491 [2024-10-01 16:56:47.610878] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
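The nvmftestinit trace above amounts to a small, reproducible recipe: flush both E810 ports, move the target-side port into a fresh network namespace, address the two ends as 10.0.0.2 (target) and 10.0.0.1 (initiator), open TCP/4420, and ping in both directions. A minimal sketch of the same steps, assuming the cvl_0_0/cvl_0_1 interface names this run detected:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target side lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# The comment tags the rule so teardown can strip exactly the SPDK_NVMF entries.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                         # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1     # target namespace -> initiator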
00:31:56.491 [2024-10-01 16:56:47.611985] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:31:56.491 [2024-10-01 16:56:47.612037] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:56.491 [2024-10-01 16:56:47.700041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:56.491 [2024-10-01 16:56:47.762023] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:56.491 [2024-10-01 16:56:47.762063] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:56.491 [2024-10-01 16:56:47.762070] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:56.491 [2024-10-01 16:56:47.762076] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:56.491 [2024-10-01 16:56:47.762082] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:56.491 [2024-10-01 16:56:47.762209] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.491 [2024-10-01 16:56:47.762378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:56.491 [2024-10-01 16:56:47.762382] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.491 [2024-10-01 16:56:47.820229] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:56.491 [2024-10-01 16:56:47.820328] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:56.491 [2024-10-01 16:56:47.820422] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:56.491 [2024-10-01 16:56:47.820550] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
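With the test bed pinging, nvmfappstart launches the target inside the namespace and waitforlisten blocks until the RPC socket answers; the DPDK EAL and reactor notices above are that startup. A rough equivalent of the launch and the wait (the polling loop below only approximates what waitforlisten in autotest_common.sh does; rpc_get_methods is a core SPDK RPC):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk \
    "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the app services requests.
for _ in $(seq 1 100); do
    "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done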
00:31:57.060 16:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:57.060 16:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:31:57.060 16:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:57.060 16:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:57.060 16:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:57.061 16:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:57.061 16:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:57.061 [2024-10-01 16:56:48.683234] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:57.061 16:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:57.321 16:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:57.321 16:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:57.580 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:57.580 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:57.840 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:57.840 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=39416db8-9e16-438b-8577-0fabfd4064db 00:31:57.840 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 39416db8-9e16-438b-8577-0fabfd4064db lvol 20 00:31:58.143 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3586f6dd-d12c-45c6-abdd-d73b29fc67e5 00:31:58.143 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:58.481 16:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3586f6dd-d12c-45c6-abdd-d73b29fc67e5 00:31:58.481 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:58.800 [2024-10-01 16:56:50.207042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:58.800 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:58.800 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2896671 00:31:58.800 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:58.800 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:59.738 16:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3586f6dd-d12c-45c6-abdd-d73b29fc67e5 MY_SNAPSHOT 00:31:59.997 16:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=aaf7f86f-8f45-4452-8a6d-424ad4a51cf1 00:31:59.997 16:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3586f6dd-d12c-45c6-abdd-d73b29fc67e5 30 00:32:00.256 16:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone aaf7f86f-8f45-4452-8a6d-424ad4a51cf1 MY_CLONE 00:32:00.516 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=aa7c8d63-a27a-486f-a837-f4e98e263c73 00:32:00.516 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate aa7c8d63-a27a-486f-a837-f4e98e263c73 00:32:01.085 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2896671 00:32:09.211 Initializing NVMe Controllers 00:32:09.211 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:09.211 Controller IO queue size 128, less than required. 00:32:09.211 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:09.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:09.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:09.211 Initialization complete. Launching workers. 
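Condensed, the RPC sequence traced above builds the lvol stack (two malloc bdevs striped into raid0, an lvstore on top, a 20 MiB lvol exported over NVMe/TCP) and then mutates it while spdk_nvme_perf writes to it. A sketch using the same rpc.py calls; the command substitutions mirror how this run captured its lvstore and lvol UUIDs:

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512                    # Malloc0
"$rpc" bdev_malloc_create 64 512                    # Malloc1
"$rpc" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$("$rpc" bdev_lvol_create_lvstore raid0 lvs)    # prints the lvstore UUID
lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB lvol, prints the bdev UUID
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!
sleep 1
# Mutate the lvol underneath the running workload:
snap=$("$rpc" bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
"$rpc" bdev_lvol_resize "$lvol" 30                  # grow 20 -> 30 MiB
clone=$("$rpc" bdev_lvol_clone "$snap" MY_CLONE)
"$rpc" bdev_lvol_inflate "$clone"                   # detach the clone from its snapshot
wait "$perf_pid"

The Latency table that follows is the summary spdk_nvme_perf prints for that 10-second randwrite run on lcores 3 and 4 (-c 0x18).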
00:32:09.211 ======================================================== 00:32:09.211 Latency(us) 00:32:09.211 Device Information : IOPS MiB/s Average min max 00:32:09.211 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17486.57 68.31 7323.69 537.15 52159.87 00:32:09.211 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 13329.13 52.07 9605.30 958.09 55983.33 00:32:09.212 ======================================================== 00:32:09.212 Total : 30815.70 120.37 8310.58 537.15 55983.33 00:32:09.212 00:32:09.212 16:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:09.470 16:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3586f6dd-d12c-45c6-abdd-d73b29fc67e5 00:32:09.470 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 39416db8-9e16-438b-8577-0fabfd4064db 00:32:09.730 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:09.730 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:09.730 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:32:09.730 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:09.730 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:09.730 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:09.730 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:09.730 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:09.730 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:09.730 rmmod nvme_tcp 00:32:09.730 rmmod nvme_fabrics 00:32:09.730 rmmod nvme_keyring 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 2896246 ']' 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 2896246 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2896246 ']' 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2896246 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2896246 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2896246' 00:32:09.990 killing process with pid 2896246 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2896246 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2896246 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:09.990 16:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:12.528 00:32:12.528 real 0m23.987s 00:32:12.528 user 0m56.320s 00:32:12.528 sys 0m10.647s 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:12.528 ************************************ 00:32:12.528 END TEST nvmf_lvol 00:32:12.528 ************************************ 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:12.528 ************************************ 00:32:12.528 START TEST nvmf_lvs_grow 00:32:12.528 
************************************ 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:12.528 * Looking for test storage... 00:32:12.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:12.528 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:12.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.529 --rc genhtml_branch_coverage=1 00:32:12.529 --rc genhtml_function_coverage=1 00:32:12.529 --rc genhtml_legend=1 00:32:12.529 --rc geninfo_all_blocks=1 00:32:12.529 --rc geninfo_unexecuted_blocks=1 00:32:12.529 00:32:12.529 ' 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:12.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.529 --rc genhtml_branch_coverage=1 00:32:12.529 --rc genhtml_function_coverage=1 00:32:12.529 --rc genhtml_legend=1 00:32:12.529 --rc geninfo_all_blocks=1 00:32:12.529 --rc geninfo_unexecuted_blocks=1 00:32:12.529 00:32:12.529 ' 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:12.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.529 --rc genhtml_branch_coverage=1 00:32:12.529 --rc genhtml_function_coverage=1 00:32:12.529 --rc genhtml_legend=1 00:32:12.529 --rc geninfo_all_blocks=1 00:32:12.529 --rc geninfo_unexecuted_blocks=1 00:32:12.529 00:32:12.529 ' 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:12.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.529 --rc genhtml_branch_coverage=1 00:32:12.529 --rc genhtml_function_coverage=1 00:32:12.529 --rc genhtml_legend=1 00:32:12.529 --rc geninfo_all_blocks=1 00:32:12.529 --rc geninfo_unexecuted_blocks=1 00:32:12.529 00:32:12.529 ' 00:32:12.529 16:57:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:12.529 16:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
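The PATH echoed above has accumulated many copies of the same three toolchain directories because paths/export.sh prepends them unconditionally each time nvmf/common.sh is sourced. That is harmless, but a guard along these lines (a hypothetical rewrite, not what the tree ships) would keep the exports idempotent:

# Hypothetical idempotent variant of the prepends in paths/export.sh.
prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;           # already on PATH, skip
        *) PATH="$1:$PATH" ;;
    esac
}
prepend_path /opt/go/1.21.1/bin
prepend_path /opt/golangci/1.54.2/bin
prepend_path /opt/protoc/21.7/bin
export PATH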
00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:12.529 16:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:19.111 16:57:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
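gather_supported_nvmf_pci_devs whitelists NICs by PCI vendor:device pairs (the e810/x722/mlx arrays above; this host matches the E810 entry 0x8086:0x159b) and then, in the loop whose "Found ..." lines follow, resolves each function to its kernel netdev through sysfs. Roughly the same lookup done by hand:

# Manual equivalent of the sysfs walk below, assuming the E810 IDs from this log.
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        echo "Found ${pci##*/}: ${net##*/}"
    done
done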
00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:19.111 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:19.111 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:19.111 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:19.111 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:19.111 16:57:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:32:19.111 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:19.371 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:19.371 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:19.371 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:19.371 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:19.371 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:19.371 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:19.371 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:19.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:19.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms
00:32:19.632
00:32:19.632 --- 10.0.0.2 ping statistics ---
00:32:19.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:19.632 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:19.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:19.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms
00:32:19.632
00:32:19.632 --- 10.0.0.1 ping statistics ---
00:32:19.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:19.632 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=2902744
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 2902744
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2902744 ']'
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:19.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:19.632 16:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:32:19.632 [2024-10-01 16:57:11.192575] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
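Note: the nvmf_tcp_init sequence traced above builds a two-port loopback topology on a single host: one port of the e810 NIC (cvl_0_0) is moved into a network namespace and owned by the target, while its peer port (cvl_0_1) stays in the root namespace as the initiator side. As a minimal standalone sketch of the same setup (the interface names, 10.0.0.0/24 addresses and port 4420 are simply the values this run used):

  # target-side namespace and addressing
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), so every NVMe/TCP connection from the initiator crosses the physical link between the two ports rather than the kernel loopback.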
00:32:19.632 [2024-10-01 16:57:11.194038] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:32:19.632 [2024-10-01 16:57:11.194105] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:19.632 [2024-10-01 16:57:11.279009] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.892 [2024-10-01 16:57:11.339977] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:19.892 [2024-10-01 16:57:11.340013] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:19.892 [2024-10-01 16:57:11.340020] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:19.892 [2024-10-01 16:57:11.340027] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:19.892 [2024-10-01 16:57:11.340032] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:19.892 [2024-10-01 16:57:11.340051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.892 [2024-10-01 16:57:11.391937] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:19.892 [2024-10-01 16:57:11.392180] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:20.463 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:20.463 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:32:20.463 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:20.463 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:20.463 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:20.463 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:20.463 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:20.724 [2024-10-01 16:57:12.300775] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:20.724 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:20.724 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:20.724 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:20.724 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:20.724 ************************************ 00:32:20.724 START TEST lvs_grow_clean 00:32:20.724 ************************************ 00:32:20.724 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:32:20.724 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:20.724 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:20.724 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:20.724 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:20.724 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:20.724 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:20.724 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:20.724 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:20.724 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:20.984 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:20.984 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:21.245 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b0d6ad78-46ce-4aa4-a5f1-4438fd65d1a7 00:32:21.245 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0d6ad78-46ce-4aa4-a5f1-4438fd65d1a7 00:32:21.245 16:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:21.505 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:21.505 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:21.505 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b0d6ad78-46ce-4aa4-a5f1-4438fd65d1a7 lvol 150 00:32:21.766 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a43e0be1-5641-4480-a284-f11adc62de32 00:32:21.766 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:21.766 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:22.027 [2024-10-01 16:57:13.524495] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:22.027 [2024-10-01 16:57:13.524658] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:22.027 true 00:32:22.027 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0d6ad78-46ce-4aa4-a5f1-4438fd65d1a7 00:32:22.027 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:22.287 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:22.287 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:22.287 16:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a43e0be1-5641-4480-a284-f11adc62de32 00:32:22.546 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:22.805 [2024-10-01 16:57:14.341049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:22.806 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:23.065 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2903617 00:32:23.065 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:23.065 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:23.065 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2903617 /var/tmp/bdevperf.sock 00:32:23.065 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2903617 ']' 00:32:23.065 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:23.065 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:23.065 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:23.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:23.065 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:23.065 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:23.065 [2024-10-01 16:57:14.612160] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:32:23.065 [2024-10-01 16:57:14.612213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903617 ] 00:32:23.065 [2024-10-01 16:57:14.662262] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.066 [2024-10-01 16:57:14.716339] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.357 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:23.357 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:32:23.357 16:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:23.617 Nvme0n1 00:32:23.617 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:23.617 [ 00:32:23.617 { 00:32:23.617 "name": "Nvme0n1", 00:32:23.617 "aliases": [ 00:32:23.617 "a43e0be1-5641-4480-a284-f11adc62de32" 00:32:23.617 ], 00:32:23.617 "product_name": "NVMe disk", 00:32:23.617 "block_size": 4096, 00:32:23.617 "num_blocks": 38912, 00:32:23.617 "uuid": "a43e0be1-5641-4480-a284-f11adc62de32", 00:32:23.617 "numa_id": 0, 00:32:23.617 "assigned_rate_limits": { 00:32:23.617 "rw_ios_per_sec": 0, 00:32:23.617 "rw_mbytes_per_sec": 0, 00:32:23.617 "r_mbytes_per_sec": 0, 00:32:23.617 "w_mbytes_per_sec": 0 00:32:23.617 }, 00:32:23.617 "claimed": false, 00:32:23.617 "zoned": false, 00:32:23.617 "supported_io_types": { 00:32:23.617 "read": true, 00:32:23.617 "write": true, 00:32:23.617 "unmap": true, 00:32:23.617 "flush": true, 00:32:23.617 "reset": true, 00:32:23.617 "nvme_admin": true, 00:32:23.617 "nvme_io": true, 00:32:23.617 "nvme_io_md": false, 00:32:23.617 "write_zeroes": true, 00:32:23.617 "zcopy": false, 00:32:23.617 "get_zone_info": false, 00:32:23.617 "zone_management": false, 00:32:23.617 "zone_append": false, 00:32:23.617 "compare": true, 00:32:23.617 "compare_and_write": true, 00:32:23.617 "abort": true, 00:32:23.617 "seek_hole": false, 00:32:23.618 "seek_data": false, 00:32:23.618 "copy": true, 
00:32:23.618 "nvme_iov_md": false 00:32:23.618 }, 00:32:23.618 "memory_domains": [ 00:32:23.618 { 00:32:23.618 "dma_device_id": "system", 00:32:23.618 "dma_device_type": 1 00:32:23.618 } 00:32:23.618 ], 00:32:23.618 "driver_specific": { 00:32:23.618 "nvme": [ 00:32:23.618 { 00:32:23.618 "trid": { 00:32:23.618 "trtype": "TCP", 00:32:23.618 "adrfam": "IPv4", 00:32:23.618 "traddr": "10.0.0.2", 00:32:23.618 "trsvcid": "4420", 00:32:23.618 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:23.618 }, 00:32:23.618 "ctrlr_data": { 00:32:23.618 "cntlid": 1, 00:32:23.618 "vendor_id": "0x8086", 00:32:23.618 "model_number": "SPDK bdev Controller", 00:32:23.618 "serial_number": "SPDK0", 00:32:23.618 "firmware_revision": "25.01", 00:32:23.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:23.618 "oacs": { 00:32:23.618 "security": 0, 00:32:23.618 "format": 0, 00:32:23.618 "firmware": 0, 00:32:23.618 "ns_manage": 0 00:32:23.618 }, 00:32:23.618 "multi_ctrlr": true, 00:32:23.618 "ana_reporting": false 00:32:23.618 }, 00:32:23.618 "vs": { 00:32:23.618 "nvme_version": "1.3" 00:32:23.618 }, 00:32:23.618 "ns_data": { 00:32:23.618 "id": 1, 00:32:23.618 "can_share": true 00:32:23.618 } 00:32:23.618 } 00:32:23.618 ], 00:32:23.618 "mp_policy": "active_passive" 00:32:23.618 } 00:32:23.618 } 00:32:23.618 ] 00:32:23.618 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2903641 00:32:23.618 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:23.618 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:23.878 Running I/O for 10 seconds... 
00:32:24.817 Latency(us)
00:32:24.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:24.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:24.817 Nvme0n1 : 1.00 18930.00 73.95 0.00 0.00 0.00 0.00 0.00
00:32:24.817 ===================================================================================================================
00:32:24.817 Total : 18930.00 73.95 0.00 0.00 0.00 0.00 0.00
00:32:24.817
00:32:25.759 16:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b0d6ad78-46ce-4aa4-a5f1-4438fd65d1a7
00:32:25.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:25.759 Nvme0n1 : 2.00 19225.50 75.10 0.00 0.00 0.00 0.00 0.00
00:32:25.759 ===================================================================================================================
00:32:25.759 Total : 19225.50 75.10 0.00 0.00 0.00 0.00 0.00
00:32:25.759
00:32:26.019 true
00:32:26.019 16:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0d6ad78-46ce-4aa4-a5f1-4438fd65d1a7
00:32:26.019 16:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:32:26.280 16:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:32:26.280 16:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:32:26.280 16:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2903641
00:32:26.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:26.850 Nvme0n1 : 3.00 19323.33 75.48 0.00 0.00 0.00 0.00 0.00
00:32:26.850 ===================================================================================================================
00:32:26.850 Total : 19323.33 75.48 0.00 0.00 0.00 0.00 0.00
00:32:26.850
00:32:27.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:27.790 Nvme0n1 : 4.00 19388.50 75.74 0.00 0.00 0.00 0.00 0.00
00:32:27.790 ===================================================================================================================
00:32:27.790 Total : 19388.50 75.74 0.00 0.00 0.00 0.00 0.00
00:32:27.790
00:32:28.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:28.729 Nvme0n1 : 5.00 19427.60 75.89 0.00 0.00 0.00 0.00 0.00
00:32:28.729 ===================================================================================================================
00:32:28.729 Total : 19427.60 75.89 0.00 0.00 0.00 0.00 0.00
00:32:28.729
00:32:30.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:30.112 Nvme0n1 : 6.00 19453.67 75.99 0.00 0.00 0.00 0.00 0.00
00:32:30.112 ===================================================================================================================
00:32:30.112 Total : 19453.67 75.99 0.00 0.00 0.00 0.00 0.00
00:32:30.112
00:32:31.051 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:31.051 Nvme0n1 : 7.00 19481.57 76.10 0.00 0.00 0.00 0.00 0.00
00:32:31.052 ===================================================================================================================
00:32:31.052 Total : 19481.57 76.10 0.00 0.00 0.00 0.00 0.00
00:32:31.052
00:32:31.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:31.991 Nvme0n1 : 8.00 19502.25 76.18 0.00 0.00 0.00 0.00 0.00
00:32:31.991 ===================================================================================================================
00:32:31.991 Total : 19502.25 76.18 0.00 0.00 0.00 0.00 0.00
00:32:31.991
00:32:32.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:32.930 Nvme0n1 : 9.00 19518.56 76.24 0.00 0.00 0.00 0.00 0.00
00:32:32.930 ===================================================================================================================
00:32:32.930 Total : 19518.56 76.24 0.00 0.00 0.00 0.00 0.00
00:32:32.930
00:32:33.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:33.870 Nvme0n1 : 10.00 19530.10 76.29 0.00 0.00 0.00 0.00 0.00
00:32:33.870 ===================================================================================================================
00:32:33.870 Total : 19530.10 76.29 0.00 0.00 0.00 0.00 0.00
00:32:33.870
00:32:33.870
00:32:33.870 Latency(us)
00:32:33.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:33.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:33.871 Nvme0n1 : 10.00 19534.92 76.31 0.00 0.00 6549.77 4083.40 30449.03
00:32:33.871 ===================================================================================================================
00:32:33.871 Total : 19534.92 76.31 0.00 0.00 6549.77 4083.40 30449.03
00:32:33.871 {
00:32:33.871 "results": [
00:32:33.871 {
00:32:33.871 "job": "Nvme0n1",
00:32:33.871 "core_mask": "0x2",
00:32:33.871 "workload": "randwrite",
00:32:33.871 "status": "finished",
00:32:33.871 "queue_depth": 128,
00:32:33.871 "io_size": 4096,
00:32:33.871 "runtime": 10.004087,
00:32:33.871 "iops": 19534.916079798186,
00:32:33.871 "mibps": 76.30826593671166,
00:32:33.871 "io_failed": 0,
00:32:33.871 "io_timeout": 0,
00:32:33.871 "avg_latency_us": 6549.7698522501,
00:32:33.871 "min_latency_us": 4083.396923076923,
00:32:33.871 "max_latency_us": 30449.033846153845
00:32:33.871 }
00:32:33.871 ],
00:32:33.871 "core_count": 1
00:32:33.871 }
00:32:33.871 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2903617
00:32:33.871 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2903617 ']'
00:32:33.871 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2903617
00:32:33.871 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname
00:32:33.871 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:33.871 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2903617
00:32:33.871 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:33.871 16:57:25
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:33.871 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2903617' 00:32:33.871 killing process with pid 2903617 00:32:33.871 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2903617 00:32:33.871 Received shutdown signal, test time was about 10.000000 seconds 00:32:33.871 00:32:33.871 Latency(us) 00:32:33.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.871 =================================================================================================================== 00:32:33.871 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:33.871 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2903617 00:32:34.131 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:34.390 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:34.390 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0d6ad78-46ce-4aa4-a5f1-4438fd65d1a7 00:32:34.390 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:34.650 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:34.650 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:34.650 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:34.910 [2024-10-01 16:57:26.464544] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:34.910 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0d6ad78-46ce-4aa4-a5f1-4438fd65d1a7 00:32:34.910 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:32:34.910 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0d6ad78-46ce-4aa4-a5f1-4438fd65d1a7 00:32:34.910 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:34.910 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:34.910 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:34.910 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:34.910 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:34.910 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:34.910 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:34.910 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:34.911 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0d6ad78-46ce-4aa4-a5f1-4438fd65d1a7 00:32:35.171 request: 00:32:35.171 { 00:32:35.171 "uuid": "b0d6ad78-46ce-4aa4-a5f1-4438fd65d1a7", 00:32:35.171 "method": "bdev_lvol_get_lvstores", 00:32:35.171 "req_id": 1 00:32:35.171 } 00:32:35.171 Got JSON-RPC error response 00:32:35.171 response: 00:32:35.171 { 00:32:35.171 "code": -19, 00:32:35.171 "message": "No such device" 00:32:35.171 } 00:32:35.171 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:32:35.171 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:35.171 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:35.171 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:35.171 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:35.431 aio_bdev 00:32:35.431 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a43e0be1-5641-4480-a284-f11adc62de32 00:32:35.431 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=a43e0be1-5641-4480-a284-f11adc62de32 00:32:35.431 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:35.431 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:32:35.431 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:35.431 16:57:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:35.431 16:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:35.691 16:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a43e0be1-5641-4480-a284-f11adc62de32 -t 2000 00:32:35.691 [ 00:32:35.691 { 00:32:35.691 "name": "a43e0be1-5641-4480-a284-f11adc62de32", 00:32:35.691 "aliases": [ 00:32:35.691 "lvs/lvol" 00:32:35.691 ], 00:32:35.691 "product_name": "Logical Volume", 00:32:35.691 "block_size": 4096, 00:32:35.691 "num_blocks": 38912, 00:32:35.691 "uuid": "a43e0be1-5641-4480-a284-f11adc62de32", 00:32:35.691 "assigned_rate_limits": { 00:32:35.691 "rw_ios_per_sec": 0, 00:32:35.691 "rw_mbytes_per_sec": 0, 00:32:35.691 "r_mbytes_per_sec": 0, 00:32:35.691 "w_mbytes_per_sec": 0 00:32:35.691 }, 00:32:35.691 "claimed": false, 00:32:35.691 "zoned": false, 00:32:35.691 "supported_io_types": { 00:32:35.691 "read": true, 00:32:35.691 "write": true, 00:32:35.691 "unmap": true, 00:32:35.691 "flush": false, 00:32:35.691 "reset": true, 00:32:35.691 "nvme_admin": false, 00:32:35.691 "nvme_io": false, 00:32:35.691 "nvme_io_md": false, 00:32:35.691 "write_zeroes": true, 00:32:35.691 "zcopy": false, 00:32:35.691 "get_zone_info": false, 00:32:35.691 "zone_management": false, 00:32:35.691 "zone_append": false, 00:32:35.691 "compare": false, 00:32:35.691 "compare_and_write": false, 00:32:35.691 "abort": false, 00:32:35.691 "seek_hole": true, 00:32:35.691 "seek_data": true, 00:32:35.691 "copy": false, 00:32:35.691 "nvme_iov_md": false 00:32:35.691 }, 00:32:35.691 "driver_specific": { 00:32:35.691 "lvol": { 00:32:35.691 "lvol_store_uuid": "b0d6ad78-46ce-4aa4-a5f1-4438fd65d1a7", 00:32:35.691 "base_bdev": "aio_bdev", 00:32:35.691 "thin_provision": false, 00:32:35.691 "num_allocated_clusters": 38, 00:32:35.691 "snapshot": false, 00:32:35.691 "clone": false, 00:32:35.691 "esnap_clone": false 00:32:35.691 } 00:32:35.691 } 00:32:35.691 } 00:32:35.691 ] 00:32:35.691 16:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:32:35.691 16:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0d6ad78-46ce-4aa4-a5f1-4438fd65d1a7 00:32:35.691 16:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:35.951 16:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:35.951 16:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0d6ad78-46ce-4aa4-a5f1-4438fd65d1a7 00:32:35.951 16:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:36.211 16:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 
)) 00:32:36.211 16:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a43e0be1-5641-4480-a284-f11adc62de32 00:32:36.472 16:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b0d6ad78-46ce-4aa4-a5f1-4438fd65d1a7 00:32:36.732 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:36.732 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:36.992 00:32:36.992 real 0m16.066s 00:32:36.992 user 0m15.770s 00:32:36.992 sys 0m1.421s 00:32:36.992 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:36.992 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:36.992 ************************************ 00:32:36.992 END TEST lvs_grow_clean 00:32:36.992 ************************************ 00:32:36.992 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:36.992 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:36.992 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:36.992 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:36.992 ************************************ 00:32:36.992 START TEST lvs_grow_dirty 00:32:36.992 ************************************ 00:32:36.992 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:32:36.993 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:36.993 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:36.993 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:36.993 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:36.993 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:36.993 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:36.993 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:36.993 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:36.993 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:37.254 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:37.254 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:37.517 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=caad2227-3d4a-46a1-b79c-223a6fa88453 00:32:37.517 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caad2227-3d4a-46a1-b79c-223a6fa88453 00:32:37.517 16:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:37.781 16:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:37.781 16:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:37.781 16:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u caad2227-3d4a-46a1-b79c-223a6fa88453 lvol 150 00:32:37.781 16:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=18e16979-eea1-41e5-ba37-4d479dc498ac 00:32:37.781 16:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:37.781 16:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:38.041 [2024-10-01 16:57:29.608466] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:38.041 [2024-10-01 16:57:29.608607] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:38.041 true 00:32:38.041 16:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caad2227-3d4a-46a1-b79c-223a6fa88453 00:32:38.041 16:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:38.300 16:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:38.300 16:57:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:38.560 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 18e16979-eea1-41e5-ba37-4d479dc498ac 00:32:38.820 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:38.820 [2024-10-01 16:57:30.444965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:38.820 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:39.080 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2906230 00:32:39.080 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:39.081 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:39.081 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2906230 /var/tmp/bdevperf.sock 00:32:39.081 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2906230 ']' 00:32:39.081 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:39.081 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:39.081 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:39.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:39.081 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:39.081 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:39.081 [2024-10-01 16:57:30.702124] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
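Note: exporting the lvol bdev over NVMe/TCP is a handful of RPCs against the target's /var/tmp/spdk.sock, all visible in the trace just above: create the transport once, create the subsystem, add the lvol as a namespace, and add listeners for the subsystem and for discovery. In short form (the UUID is this run's lvol; rpc.py is invoked by full path in the harness):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 18e16979-eea1-41e5-ba37-4d479dc498ac
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

No ip netns exec is needed for the RPCs themselves: the UNIX-domain RPC socket lives on the shared filesystem even though the target's network stack is namespaced.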
00:32:39.081 [2024-10-01 16:57:30.702175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2906230 ] 00:32:39.081 [2024-10-01 16:57:30.753126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.340 [2024-10-01 16:57:30.808000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.340 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:39.340 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:32:39.340 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:39.601 Nvme0n1 00:32:39.601 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:39.861 [ 00:32:39.861 { 00:32:39.861 "name": "Nvme0n1", 00:32:39.861 "aliases": [ 00:32:39.861 "18e16979-eea1-41e5-ba37-4d479dc498ac" 00:32:39.861 ], 00:32:39.861 "product_name": "NVMe disk", 00:32:39.861 "block_size": 4096, 00:32:39.861 "num_blocks": 38912, 00:32:39.861 "uuid": "18e16979-eea1-41e5-ba37-4d479dc498ac", 00:32:39.861 "numa_id": 0, 00:32:39.861 "assigned_rate_limits": { 00:32:39.861 "rw_ios_per_sec": 0, 00:32:39.861 "rw_mbytes_per_sec": 0, 00:32:39.861 "r_mbytes_per_sec": 0, 00:32:39.861 "w_mbytes_per_sec": 0 00:32:39.861 }, 00:32:39.861 "claimed": false, 00:32:39.861 "zoned": false, 00:32:39.861 "supported_io_types": { 00:32:39.861 "read": true, 00:32:39.861 "write": true, 00:32:39.861 "unmap": true, 00:32:39.861 "flush": true, 00:32:39.861 "reset": true, 00:32:39.861 "nvme_admin": true, 00:32:39.861 "nvme_io": true, 00:32:39.861 "nvme_io_md": false, 00:32:39.861 "write_zeroes": true, 00:32:39.861 "zcopy": false, 00:32:39.861 "get_zone_info": false, 00:32:39.861 "zone_management": false, 00:32:39.861 "zone_append": false, 00:32:39.861 "compare": true, 00:32:39.861 "compare_and_write": true, 00:32:39.861 "abort": true, 00:32:39.861 "seek_hole": false, 00:32:39.861 "seek_data": false, 00:32:39.861 "copy": true, 00:32:39.861 "nvme_iov_md": false 00:32:39.861 }, 00:32:39.861 "memory_domains": [ 00:32:39.861 { 00:32:39.861 "dma_device_id": "system", 00:32:39.861 "dma_device_type": 1 00:32:39.861 } 00:32:39.861 ], 00:32:39.861 "driver_specific": { 00:32:39.861 "nvme": [ 00:32:39.861 { 00:32:39.861 "trid": { 00:32:39.861 "trtype": "TCP", 00:32:39.861 "adrfam": "IPv4", 00:32:39.861 "traddr": "10.0.0.2", 00:32:39.861 "trsvcid": "4420", 00:32:39.861 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:39.861 }, 00:32:39.861 "ctrlr_data": { 00:32:39.861 "cntlid": 1, 00:32:39.861 "vendor_id": "0x8086", 00:32:39.861 "model_number": "SPDK bdev Controller", 00:32:39.861 "serial_number": "SPDK0", 00:32:39.861 "firmware_revision": "25.01", 00:32:39.861 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:39.861 "oacs": { 00:32:39.861 "security": 0, 00:32:39.861 "format": 0, 00:32:39.861 "firmware": 0, 00:32:39.861 "ns_manage": 0 00:32:39.861 }, 
00:32:39.861 "multi_ctrlr": true, 00:32:39.862 "ana_reporting": false 00:32:39.862 }, 00:32:39.862 "vs": { 00:32:39.862 "nvme_version": "1.3" 00:32:39.862 }, 00:32:39.862 "ns_data": { 00:32:39.862 "id": 1, 00:32:39.862 "can_share": true 00:32:39.862 } 00:32:39.862 } 00:32:39.862 ], 00:32:39.862 "mp_policy": "active_passive" 00:32:39.862 } 00:32:39.862 } 00:32:39.862 ] 00:32:39.862 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2906415 00:32:39.862 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:39.862 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:39.862 Running I/O for 10 seconds... 00:32:40.800 Latency(us) 00:32:40.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:40.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:40.800 Nvme0n1 : 1.00 18874.00 73.73 0.00 0.00 0.00 0.00 0.00 00:32:40.800 =================================================================================================================== 00:32:40.800 Total : 18874.00 73.73 0.00 0.00 0.00 0.00 0.00 00:32:40.800 00:32:41.739 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u caad2227-3d4a-46a1-b79c-223a6fa88453 00:32:41.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:41.998 Nvme0n1 : 2.00 19165.00 74.86 0.00 0.00 0.00 0.00 0.00 00:32:41.998 =================================================================================================================== 00:32:41.998 Total : 19165.00 74.86 0.00 0.00 0.00 0.00 0.00 00:32:41.998 00:32:41.998 true 00:32:41.998 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caad2227-3d4a-46a1-b79c-223a6fa88453 00:32:41.998 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:42.258 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:42.258 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:42.258 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2906415 00:32:42.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:42.828 Nvme0n1 : 3.00 19262.00 75.24 0.00 0.00 0.00 0.00 0.00 00:32:42.828 =================================================================================================================== 00:32:42.828 Total : 19262.00 75.24 0.00 0.00 0.00 0.00 0.00 00:32:42.828 00:32:44.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:44.207 Nvme0n1 : 4.00 19326.75 75.50 0.00 0.00 0.00 0.00 0.00 00:32:44.207 =================================================================================================================== 
00:32:44.207 Total : 19326.75 75.50 0.00 0.00 0.00 0.00 0.00 00:32:44.207 00:32:45.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:45.146 Nvme0n1 : 5.00 19365.20 75.65 0.00 0.00 0.00 0.00 0.00 00:32:45.146 =================================================================================================================== 00:32:45.146 Total : 19365.20 75.65 0.00 0.00 0.00 0.00 0.00 00:32:45.146 00:32:46.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:46.085 Nvme0n1 : 6.00 19401.67 75.79 0.00 0.00 0.00 0.00 0.00 00:32:46.085 =================================================================================================================== 00:32:46.085 Total : 19401.67 75.79 0.00 0.00 0.00 0.00 0.00 00:32:46.085 00:32:47.025 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:47.025 Nvme0n1 : 7.00 19427.71 75.89 0.00 0.00 0.00 0.00 0.00 00:32:47.025 =================================================================================================================== 00:32:47.025 Total : 19427.71 75.89 0.00 0.00 0.00 0.00 0.00 00:32:47.025 00:32:47.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:47.964 Nvme0n1 : 8.00 19447.25 75.97 0.00 0.00 0.00 0.00 0.00 00:32:47.964 =================================================================================================================== 00:32:47.964 Total : 19447.25 75.97 0.00 0.00 0.00 0.00 0.00 00:32:47.964 00:32:48.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:48.904 Nvme0n1 : 9.00 19457.11 76.00 0.00 0.00 0.00 0.00 0.00 00:32:48.904 =================================================================================================================== 00:32:48.904 Total : 19457.11 76.00 0.00 0.00 0.00 0.00 0.00 00:32:48.904 00:32:49.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:49.936 Nvme0n1 : 10.00 19474.60 76.07 0.00 0.00 0.00 0.00 0.00 00:32:49.936 =================================================================================================================== 00:32:49.936 Total : 19474.60 76.07 0.00 0.00 0.00 0.00 0.00 00:32:49.936 00:32:49.936 00:32:49.936 Latency(us) 00:32:49.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:49.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:49.936 Nvme0n1 : 10.01 19476.69 76.08 0.00 0.00 6568.43 4209.43 30650.68 00:32:49.936 =================================================================================================================== 00:32:49.936 Total : 19476.69 76.08 0.00 0.00 6568.43 4209.43 30650.68 00:32:49.936 { 00:32:49.936 "results": [ 00:32:49.936 { 00:32:49.936 "job": "Nvme0n1", 00:32:49.936 "core_mask": "0x2", 00:32:49.936 "workload": "randwrite", 00:32:49.936 "status": "finished", 00:32:49.936 "queue_depth": 128, 00:32:49.936 "io_size": 4096, 00:32:49.936 "runtime": 10.005499, 00:32:49.936 "iops": 19476.689768296415, 00:32:49.936 "mibps": 76.08081940740787, 00:32:49.936 "io_failed": 0, 00:32:49.936 "io_timeout": 0, 00:32:49.936 "avg_latency_us": 6568.426921063788, 00:32:49.936 "min_latency_us": 4209.427692307692, 00:32:49.936 "max_latency_us": 30650.683076923076 00:32:49.936 } 00:32:49.936 ], 00:32:49.936 "core_count": 1 00:32:49.936 } 00:32:49.936 16:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2906230 00:32:49.936 
16:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2906230 ']' 00:32:49.936 16:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2906230 00:32:49.936 16:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:32:49.936 16:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:49.936 16:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2906230 00:32:49.936 16:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:49.936 16:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:49.936 16:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2906230' 00:32:49.936 killing process with pid 2906230 00:32:49.936 16:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2906230 00:32:49.936 Received shutdown signal, test time was about 10.000000 seconds 00:32:49.936 00:32:49.936 Latency(us) 00:32:49.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:49.936 =================================================================================================================== 00:32:49.936 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:49.936 16:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2906230 00:32:50.219 16:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:50.488 16:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:50.488 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caad2227-3d4a-46a1-b79c-223a6fa88453 00:32:50.488 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:50.748 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:50.748 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:50.748 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2902744 00:32:50.748 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2902744 00:32:50.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2902744 Killed 
"${NVMF_APP[@]}" "$@" 00:32:50.748 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:50.748 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:50.748 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:50.748 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:50.748 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:50.748 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=2908250 00:32:50.748 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 2908250 00:32:50.748 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:50.748 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2908250 ']' 00:32:50.748 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.748 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:50.748 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.748 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:50.748 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:51.008 [2024-10-01 16:57:42.449978] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:51.008 [2024-10-01 16:57:42.450888] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:32:51.008 [2024-10-01 16:57:42.450928] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:51.008 [2024-10-01 16:57:42.533279] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.008 [2024-10-01 16:57:42.595080] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:51.008 [2024-10-01 16:57:42.595117] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:51.008 [2024-10-01 16:57:42.595124] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:51.008 [2024-10-01 16:57:42.595130] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:51.008 [2024-10-01 16:57:42.595135] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:51.008 [2024-10-01 16:57:42.595159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.008 [2024-10-01 16:57:42.647213] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:51.008 [2024-10-01 16:57:42.647445] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:51.008 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:51.008 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:32:51.008 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:51.008 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:51.008 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:51.269 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:51.269 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:51.269 [2024-10-01 16:57:42.869198] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:51.269 [2024-10-01 16:57:42.869421] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:51.269 [2024-10-01 16:57:42.869510] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:51.269 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:51.269 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 18e16979-eea1-41e5-ba37-4d479dc498ac 00:32:51.269 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=18e16979-eea1-41e5-ba37-4d479dc498ac 00:32:51.269 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:51.269 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:32:51.269 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:51.269 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:51.269 16:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:51.529 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 18e16979-eea1-41e5-ba37-4d479dc498ac -t 2000 00:32:51.789 [ 00:32:51.789 { 00:32:51.789 "name": "18e16979-eea1-41e5-ba37-4d479dc498ac", 00:32:51.789 "aliases": [ 00:32:51.789 "lvs/lvol" 00:32:51.789 ], 00:32:51.789 "product_name": "Logical Volume", 00:32:51.789 "block_size": 4096, 00:32:51.789 "num_blocks": 38912, 00:32:51.789 "uuid": "18e16979-eea1-41e5-ba37-4d479dc498ac", 00:32:51.789 "assigned_rate_limits": { 00:32:51.789 "rw_ios_per_sec": 0, 00:32:51.789 "rw_mbytes_per_sec": 0, 00:32:51.789 "r_mbytes_per_sec": 0, 00:32:51.789 "w_mbytes_per_sec": 0 00:32:51.789 }, 00:32:51.789 "claimed": false, 00:32:51.789 "zoned": false, 00:32:51.789 "supported_io_types": { 00:32:51.789 "read": true, 00:32:51.789 "write": true, 00:32:51.789 "unmap": true, 00:32:51.789 "flush": false, 00:32:51.789 "reset": true, 00:32:51.789 "nvme_admin": false, 00:32:51.789 "nvme_io": false, 00:32:51.789 "nvme_io_md": false, 00:32:51.789 "write_zeroes": true, 00:32:51.789 "zcopy": false, 00:32:51.789 "get_zone_info": false, 00:32:51.789 "zone_management": false, 00:32:51.789 "zone_append": false, 00:32:51.789 "compare": false, 00:32:51.789 "compare_and_write": false, 00:32:51.789 "abort": false, 00:32:51.789 "seek_hole": true, 00:32:51.789 "seek_data": true, 00:32:51.789 "copy": false, 00:32:51.789 "nvme_iov_md": false 00:32:51.789 }, 00:32:51.789 "driver_specific": { 00:32:51.789 "lvol": { 00:32:51.789 "lvol_store_uuid": "caad2227-3d4a-46a1-b79c-223a6fa88453", 00:32:51.789 "base_bdev": "aio_bdev", 00:32:51.789 "thin_provision": false, 00:32:51.789 "num_allocated_clusters": 38, 00:32:51.789 "snapshot": false, 00:32:51.789 "clone": false, 00:32:51.789 "esnap_clone": false 00:32:51.789 } 00:32:51.789 } 00:32:51.789 } 00:32:51.789 ] 00:32:51.789 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:32:51.789 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caad2227-3d4a-46a1-b79c-223a6fa88453 00:32:51.789 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:52.049 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:52.049 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:52.049 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caad2227-3d4a-46a1-b79c-223a6fa88453 00:32:52.310 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:52.310 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:52.310 [2024-10-01 16:57:43.923669] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:52.310 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caad2227-3d4a-46a1-b79c-223a6fa88453 00:32:52.310 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:32:52.310 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caad2227-3d4a-46a1-b79c-223a6fa88453 00:32:52.310 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:52.310 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:52.310 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:52.310 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:52.310 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:52.310 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:52.310 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:52.310 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:52.310 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caad2227-3d4a-46a1-b79c-223a6fa88453 00:32:52.570 request: 00:32:52.570 { 00:32:52.570 "uuid": "caad2227-3d4a-46a1-b79c-223a6fa88453", 00:32:52.570 "method": "bdev_lvol_get_lvstores", 00:32:52.570 "req_id": 1 00:32:52.570 } 00:32:52.570 Got JSON-RPC error response 00:32:52.570 response: 00:32:52.570 { 00:32:52.570 "code": -19, 00:32:52.570 "message": "No such device" 00:32:52.570 } 00:32:52.570 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:32:52.570 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:52.570 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:52.570 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:52.570 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:52.830 
aio_bdev 00:32:52.830 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 18e16979-eea1-41e5-ba37-4d479dc498ac 00:32:52.830 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=18e16979-eea1-41e5-ba37-4d479dc498ac 00:32:52.830 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:52.830 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:32:52.830 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:52.830 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:52.830 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:53.090 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 18e16979-eea1-41e5-ba37-4d479dc498ac -t 2000 00:32:53.350 [ 00:32:53.350 { 00:32:53.350 "name": "18e16979-eea1-41e5-ba37-4d479dc498ac", 00:32:53.350 "aliases": [ 00:32:53.350 "lvs/lvol" 00:32:53.350 ], 00:32:53.350 "product_name": "Logical Volume", 00:32:53.350 "block_size": 4096, 00:32:53.350 "num_blocks": 38912, 00:32:53.350 "uuid": "18e16979-eea1-41e5-ba37-4d479dc498ac", 00:32:53.350 "assigned_rate_limits": { 00:32:53.350 "rw_ios_per_sec": 0, 00:32:53.350 "rw_mbytes_per_sec": 0, 00:32:53.350 "r_mbytes_per_sec": 0, 00:32:53.350 "w_mbytes_per_sec": 0 00:32:53.350 }, 00:32:53.350 "claimed": false, 00:32:53.350 "zoned": false, 00:32:53.350 "supported_io_types": { 00:32:53.350 "read": true, 00:32:53.350 "write": true, 00:32:53.350 "unmap": true, 00:32:53.350 "flush": false, 00:32:53.350 "reset": true, 00:32:53.350 "nvme_admin": false, 00:32:53.350 "nvme_io": false, 00:32:53.350 "nvme_io_md": false, 00:32:53.350 "write_zeroes": true, 00:32:53.350 "zcopy": false, 00:32:53.350 "get_zone_info": false, 00:32:53.350 "zone_management": false, 00:32:53.350 "zone_append": false, 00:32:53.350 "compare": false, 00:32:53.350 "compare_and_write": false, 00:32:53.350 "abort": false, 00:32:53.350 "seek_hole": true, 00:32:53.350 "seek_data": true, 00:32:53.350 "copy": false, 00:32:53.350 "nvme_iov_md": false 00:32:53.350 }, 00:32:53.350 "driver_specific": { 00:32:53.350 "lvol": { 00:32:53.350 "lvol_store_uuid": "caad2227-3d4a-46a1-b79c-223a6fa88453", 00:32:53.350 "base_bdev": "aio_bdev", 00:32:53.350 "thin_provision": false, 00:32:53.350 "num_allocated_clusters": 38, 00:32:53.350 "snapshot": false, 00:32:53.350 "clone": false, 00:32:53.350 "esnap_clone": false 00:32:53.350 } 00:32:53.350 } 00:32:53.350 } 00:32:53.350 ] 00:32:53.350 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:32:53.350 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caad2227-3d4a-46a1-b79c-223a6fa88453 00:32:53.350 16:57:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:53.350 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:53.350 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u caad2227-3d4a-46a1-b79c-223a6fa88453 00:32:53.350 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:53.610 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:53.610 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 18e16979-eea1-41e5-ba37-4d479dc498ac 00:32:53.869 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u caad2227-3d4a-46a1-b79c-223a6fa88453 00:32:54.129 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:54.390 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:54.390 00:32:54.390 real 0m17.410s 00:32:54.390 user 0m35.796s 00:32:54.390 sys 0m3.071s 00:32:54.390 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:54.390 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:54.390 ************************************ 00:32:54.390 END TEST lvs_grow_dirty 00:32:54.390 ************************************ 00:32:54.390 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:54.390 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:32:54.390 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:32:54.390 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:32:54.390 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:54.390 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:32:54.390 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:32:54.390 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:32:54.390 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:54.390 nvmf_trace.0 00:32:54.390 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:32:54.390 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:54.390 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:54.390 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:32:54.390 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:54.390 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:54.390 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:54.390 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:54.390 rmmod nvme_tcp 00:32:54.390 rmmod nvme_fabrics 00:32:54.390 rmmod nvme_keyring 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 2908250 ']' 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 2908250 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2908250 ']' 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2908250 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2908250 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2908250' 00:32:54.651 killing process with pid 2908250 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2908250 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2908250 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 
00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:54.651 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:57.195 00:32:57.195 real 0m44.570s 00:32:57.195 user 0m54.563s 00:32:57.195 sys 0m10.319s 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:57.195 ************************************ 00:32:57.195 END TEST nvmf_lvs_grow 00:32:57.195 ************************************ 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:57.195 ************************************ 00:32:57.195 START TEST nvmf_bdev_io_wait 00:32:57.195 ************************************ 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:57.195 * Looking for test storage... 
00:32:57.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:57.195 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:57.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.196 --rc genhtml_branch_coverage=1 00:32:57.196 --rc genhtml_function_coverage=1 00:32:57.196 --rc genhtml_legend=1 00:32:57.196 --rc geninfo_all_blocks=1 00:32:57.196 --rc geninfo_unexecuted_blocks=1 00:32:57.196 00:32:57.196 ' 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:57.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.196 --rc genhtml_branch_coverage=1 00:32:57.196 --rc genhtml_function_coverage=1 00:32:57.196 --rc genhtml_legend=1 00:32:57.196 --rc geninfo_all_blocks=1 00:32:57.196 --rc geninfo_unexecuted_blocks=1 00:32:57.196 00:32:57.196 ' 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:57.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.196 --rc genhtml_branch_coverage=1 00:32:57.196 --rc genhtml_function_coverage=1 00:32:57.196 --rc genhtml_legend=1 00:32:57.196 --rc geninfo_all_blocks=1 00:32:57.196 --rc geninfo_unexecuted_blocks=1 00:32:57.196 00:32:57.196 ' 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:57.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.196 --rc genhtml_branch_coverage=1 00:32:57.196 --rc genhtml_function_coverage=1 00:32:57.196 --rc genhtml_legend=1 00:32:57.196 --rc geninfo_all_blocks=1 00:32:57.196 --rc 
geninfo_unexecuted_blocks=1 00:32:57.196 00:32:57.196 ' 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:57.196 16:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
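The ID tables assembled above reduce to a simple sysfs walk: every PCI function whose device ID matches a supported NIC (here the two Intel E810 ports, 0x8086:0x159b) is mapped to its kernel net device. A condensed sketch of the loop the following lines trace, using only the two addresses and constructs from this run:

    # Map each supported PCI function to its netdev name via sysfs,
    # as gather_supported_nvmf_pci_devs does in the trace below.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
    # In this run the two ports resolve to cvl_0_0 and cvl_0_1.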
00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:05.336 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:05.336 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:05.336 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:05.336 
16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:05.336 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:05.336 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:05.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:05.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:33:05.337 00:33:05.337 --- 10.0.0.2 ping statistics --- 00:33:05.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.337 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:05.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:05.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:33:05.337 00:33:05.337 --- 10.0.0.1 ping statistics --- 00:33:05.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.337 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:05.337 16:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:05.337 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=2912839 00:33:05.337 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 2912839 00:33:05.337 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:05.337 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2912839 ']' 00:33:05.337 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.337 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:05.337 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
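Note: the nvmf_tcp_init sequence traced above boils down to a short iproute2/iptables recipe: the target-side port is moved into a fresh network namespace, each side gets one /24 address, the NVMe/TCP port is opened, and a ping in each direction proves the link before the target app is launched inside the namespace. A minimal standalone sketch (interface names, addresses, and the namespace name are taken from this log; real port names will differ):

    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip netns add "$NS"                        # namespace that will host nvmf_tgt
    ip link set "$TGT_IF" netns "$NS"         # move the target-side port into it
    ip addr add 10.0.0.1/24 dev "$INI_IF"     # initiator IP stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                        # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1    # and back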
00:33:05.337 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:05.337 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:05.337 [2024-10-01 16:57:56.069949] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:05.337 [2024-10-01 16:57:56.071013] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:33:05.337 [2024-10-01 16:57:56.071063] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:05.337 [2024-10-01 16:57:56.157090] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:05.337 [2024-10-01 16:57:56.281661] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:05.337 [2024-10-01 16:57:56.281741] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:05.337 [2024-10-01 16:57:56.281754] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:05.337 [2024-10-01 16:57:56.281763] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:05.337 [2024-10-01 16:57:56.281772] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:05.337 [2024-10-01 16:57:56.281913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.337 [2024-10-01 16:57:56.281987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:05.337 [2024-10-01 16:57:56.282092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:33:05.337 [2024-10-01 16:57:56.282098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.337 [2024-10-01 16:57:56.282630] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
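Note: the nvmfappstart/waitforlisten step traced above is roughly the following when done by hand (run from the spdk checkout; the RPC poll is a simplified stand-in for the harness's waitforlisten helper, not its exact logic):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # poll until the app answers on its UNIX-domain RPC socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done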
00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:05.598 [2024-10-01 16:57:57.129316] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:05.598 [2024-10-01 16:57:57.129453] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:05.598 [2024-10-01 16:57:57.130155] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:05.598 [2024-10-01 16:57:57.130694] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
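Note: because the target was started with --wait-for-rpc, subsystem initialization is held back until the bdev options are pinned; the tiny pool and cache sizes are presumably what force the bdev-IO-wait path this test exercises. In plain rpc.py terms, the sequence driven here and in the provisioning steps that follow is:

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc bdev_set_options -p 5 -c 1   # bdev IO pool size 5, cache size 1
    $rpc framework_start_init         # now finish subsystem init
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420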
00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:05.598 [2024-10-01 16:57:57.143118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.598 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:05.598 Malloc0 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:05.599 [2024-10-01 16:57:57.219222] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2912997 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2913000 00:33:05.599 16:57:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:05.599 { 00:33:05.599 "params": { 00:33:05.599 "name": "Nvme$subsystem", 00:33:05.599 "trtype": "$TEST_TRANSPORT", 00:33:05.599 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:05.599 "adrfam": "ipv4", 00:33:05.599 "trsvcid": "$NVMF_PORT", 00:33:05.599 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:05.599 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:05.599 "hdgst": ${hdgst:-false}, 00:33:05.599 "ddgst": ${ddgst:-false} 00:33:05.599 }, 00:33:05.599 "method": "bdev_nvme_attach_controller" 00:33:05.599 } 00:33:05.599 EOF 00:33:05.599 )") 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2913003 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2913007 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:05.599 { 00:33:05.599 "params": { 00:33:05.599 "name": "Nvme$subsystem", 00:33:05.599 "trtype": "$TEST_TRANSPORT", 00:33:05.599 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:05.599 "adrfam": "ipv4", 00:33:05.599 "trsvcid": "$NVMF_PORT", 00:33:05.599 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:05.599 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:05.599 "hdgst": ${hdgst:-false}, 00:33:05.599 "ddgst": ${ddgst:-false} 00:33:05.599 }, 00:33:05.599 "method": "bdev_nvme_attach_controller" 00:33:05.599 } 00:33:05.599 EOF 00:33:05.599 )") 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:05.599 { 00:33:05.599 "params": { 00:33:05.599 "name": "Nvme$subsystem", 00:33:05.599 "trtype": "$TEST_TRANSPORT", 00:33:05.599 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:05.599 "adrfam": "ipv4", 00:33:05.599 "trsvcid": "$NVMF_PORT", 00:33:05.599 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:05.599 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:05.599 "hdgst": ${hdgst:-false}, 00:33:05.599 "ddgst": ${ddgst:-false} 00:33:05.599 }, 00:33:05.599 "method": "bdev_nvme_attach_controller" 00:33:05.599 } 00:33:05.599 EOF 00:33:05.599 )") 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:05.599 { 00:33:05.599 "params": { 00:33:05.599 "name": "Nvme$subsystem", 00:33:05.599 "trtype": "$TEST_TRANSPORT", 00:33:05.599 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:05.599 "adrfam": "ipv4", 00:33:05.599 "trsvcid": "$NVMF_PORT", 00:33:05.599 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:05.599 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:05.599 "hdgst": ${hdgst:-false}, 00:33:05.599 "ddgst": ${ddgst:-false} 00:33:05.599 }, 00:33:05.599 "method": "bdev_nvme_attach_controller" 00:33:05.599 } 00:33:05.599 EOF 00:33:05.599 )") 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2912997 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
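Note: the four bdevperf jobs above share one pattern: a per-instance JSON config (the gen_nvmf_target_json heredoc, rendered just below) is handed over an anonymous descriptor via process substitution, which is why the command lines show --json /dev/fd/63. A reduced sketch of the write job, assuming gen_nvmf_target_json is in scope:

    ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    # the read (-m 0x20), flush (-m 0x40) and unmap (-m 0x80) jobs are launched the same way
    wait "$WRITE_PID"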
00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:05.599 "params": { 00:33:05.599 "name": "Nvme1", 00:33:05.599 "trtype": "tcp", 00:33:05.599 "traddr": "10.0.0.2", 00:33:05.599 "adrfam": "ipv4", 00:33:05.599 "trsvcid": "4420", 00:33:05.599 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:05.599 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:05.599 "hdgst": false, 00:33:05.599 "ddgst": false 00:33:05.599 }, 00:33:05.599 "method": "bdev_nvme_attach_controller" 00:33:05.599 }' 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:05.599 "params": { 00:33:05.599 "name": "Nvme1", 00:33:05.599 "trtype": "tcp", 00:33:05.599 "traddr": "10.0.0.2", 00:33:05.599 "adrfam": "ipv4", 00:33:05.599 "trsvcid": "4420", 00:33:05.599 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:05.599 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:05.599 "hdgst": false, 00:33:05.599 "ddgst": false 00:33:05.599 }, 00:33:05.599 "method": "bdev_nvme_attach_controller" 00:33:05.599 }' 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:05.599 "params": { 00:33:05.599 "name": "Nvme1", 00:33:05.599 "trtype": "tcp", 00:33:05.599 "traddr": "10.0.0.2", 00:33:05.599 "adrfam": "ipv4", 00:33:05.599 "trsvcid": "4420", 00:33:05.599 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:05.599 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:05.599 "hdgst": false, 00:33:05.599 "ddgst": false 00:33:05.599 }, 00:33:05.599 "method": "bdev_nvme_attach_controller" 00:33:05.599 }' 00:33:05.599 16:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:05.600 "params": { 00:33:05.600 "name": "Nvme1", 00:33:05.600 "trtype": "tcp", 00:33:05.600 "traddr": "10.0.0.2", 00:33:05.600 "adrfam": "ipv4", 00:33:05.600 "trsvcid": "4420", 00:33:05.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:05.600 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:05.600 "hdgst": false, 00:33:05.600 "ddgst": false 00:33:05.600 }, 00:33:05.600 "method": "bdev_nvme_attach_controller" 00:33:05.600 }' 00:33:05.600 [2024-10-01 16:57:57.273600] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:33:05.600 [2024-10-01 16:57:57.273652] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:05.600 [2024-10-01 16:57:57.275015] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:33:05.600 [2024-10-01 16:57:57.275060] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:05.600 [2024-10-01 16:57:57.275080] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:33:05.600 [2024-10-01 16:57:57.275121] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:05.600 [2024-10-01 16:57:57.277522] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:33:05.600 [2024-10-01 16:57:57.277564] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:05.859 [2024-10-01 16:57:57.391024] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.859 [2024-10-01 16:57:57.434412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:33:05.859 [2024-10-01 16:57:57.462966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.860 [2024-10-01 16:57:57.510816] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.860 [2024-10-01 16:57:57.512663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:33:06.119 [2024-10-01 16:57:57.556498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.119 [2024-10-01 16:57:57.558860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:33:06.119 [2024-10-01 16:57:57.604198] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:33:06.119 Running I/O for 1 seconds... 00:33:06.379 Running I/O for 1 seconds... 00:33:06.379 Running I/O for 1 seconds... 00:33:06.379 Running I/O for 1 seconds... 
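Note: in the result tables below, the MiB/s column is simply IOPS times the 4096-byte IO size; for the write job, for example:

    # 10737.12 IOPS * 4096 B / 2^20 B per MiB = 41.94 MiB/s, matching the table
    awk 'BEGIN { printf "%.2f\n", 10737.12 * 4096 / 1048576 }'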
00:33:07.318 10755.00 IOPS, 42.01 MiB/s
00:33:07.318 Latency(us)
00:33:07.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:07.318 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:33:07.318 Nvme1n1 : 1.01 10737.12 41.94 0.00 0.00 11852.62 1569.08 22080.59
00:33:07.318 ===================================================================================================================
00:33:07.318 Total : 10737.12 41.94 0.00 0.00 11852.62 1569.08 22080.59
00:33:07.318 16:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2913000
00:33:07.318 20017.00 IOPS, 78.19 MiB/s
00:33:07.318 Latency(us)
00:33:07.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:07.318 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:33:07.318 Nvme1n1 : 1.01 20074.04 78.41 0.00 0.00 6360.65 2079.51 10183.29
00:33:07.318 ===================================================================================================================
00:33:07.318 Total : 20074.04 78.41 0.00 0.00 6360.65 2079.51 10183.29
00:33:07.318 193928.00 IOPS, 757.53 MiB/s
00:33:07.318 Latency(us)
00:33:07.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:07.318 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:33:07.318 Nvme1n1 : 1.00 193551.03 756.06 0.00 0.00 657.72 299.32 1915.67
00:33:07.318 ===================================================================================================================
00:33:07.318 Total : 193551.03 756.06 0.00 0.00 657.72 299.32 1915.67
00:33:07.579 10427.00 IOPS, 40.73 MiB/s
00:33:07.579 Latency(us)
00:33:07.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:07.579 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:33:07.579 Nvme1n1 : 1.01 10504.62 41.03 0.00 0.00 12154.48 3503.66 29844.09
00:33:07.579 ===================================================================================================================
00:33:07.579 Total : 10504.62 41.03 0.00 0.00 12154.48 3503.66 29844.09
00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2913003
00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2913007
00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup
00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:33:07.579 16:57:59
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:07.579 rmmod nvme_tcp 00:33:07.579 rmmod nvme_fabrics 00:33:07.579 rmmod nvme_keyring 00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 2912839 ']' 00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 2912839 00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2912839 ']' 00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2912839 00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:33:07.579 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:07.839 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2912839 00:33:07.839 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:07.839 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:07.839 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2912839' 00:33:07.839 killing process with pid 2912839 00:33:07.839 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2912839 00:33:07.839 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2912839 00:33:07.839 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:07.839 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:07.839 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:07.839 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:07.839 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:33:07.839 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:07.839 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:33:07.839 16:57:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:07.839 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:07.839 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.839 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.839 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:10.380 00:33:10.380 real 0m13.061s 00:33:10.380 user 0m16.151s 00:33:10.380 sys 0m7.436s 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:10.380 ************************************ 00:33:10.380 END TEST nvmf_bdev_io_wait 00:33:10.380 ************************************ 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:10.380 ************************************ 00:33:10.380 START TEST nvmf_queue_depth 00:33:10.380 ************************************ 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:10.380 * Looking for test storage... 
00:33:10.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:10.380 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:10.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.381 --rc genhtml_branch_coverage=1 00:33:10.381 --rc genhtml_function_coverage=1 00:33:10.381 --rc genhtml_legend=1 00:33:10.381 --rc geninfo_all_blocks=1 00:33:10.381 --rc geninfo_unexecuted_blocks=1 00:33:10.381 00:33:10.381 ' 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:10.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.381 --rc genhtml_branch_coverage=1 00:33:10.381 --rc genhtml_function_coverage=1 00:33:10.381 --rc genhtml_legend=1 00:33:10.381 --rc geninfo_all_blocks=1 00:33:10.381 --rc geninfo_unexecuted_blocks=1 00:33:10.381 00:33:10.381 ' 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:10.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.381 --rc genhtml_branch_coverage=1 00:33:10.381 --rc genhtml_function_coverage=1 00:33:10.381 --rc genhtml_legend=1 00:33:10.381 --rc geninfo_all_blocks=1 00:33:10.381 --rc geninfo_unexecuted_blocks=1 00:33:10.381 00:33:10.381 ' 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:10.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.381 --rc genhtml_branch_coverage=1 00:33:10.381 --rc genhtml_function_coverage=1 00:33:10.381 --rc genhtml_legend=1 00:33:10.381 --rc geninfo_all_blocks=1 00:33:10.381 --rc 
geninfo_unexecuted_blocks=1 00:33:10.381 00:33:10.381 ' 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.381 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:10.382 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.382 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:10.382 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:10.382 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:10.382 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
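Note: gather_supported_nvmf_pci_devs, traced above and continuing below, is matching PCI vendor/device IDs against the known e810/x722/mlx part lists. The same discovery can be reproduced with a plain sysfs walk (IDs per this log: vendor 0x8086, device 0x159b for the E810 ports it finds):

    for dev in /sys/bus/pci/devices/*; do
        ven=$(<"$dev/vendor"); did=$(<"$dev/device")
        [[ $ven == 0x8086 && $did == 0x159b ]] || continue
        # print the netdev name(s) registered for this PCI function, if any
        echo "Found ${dev##*/}: $(ls "$dev/net" 2>/dev/null)"
    done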
00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:16.967 16:58:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:16.967 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:16.967 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:33:16.967 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:16.967 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:33:16.967 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:16.968 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:16.968 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:16.968 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:16.968 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:16.968 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:16.968 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:16.968 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:16.968 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:16.968 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:16.968 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:16.968 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:16.968 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:16.968 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:16.968 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:16.968 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:17.228 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:17.228 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:17.228 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:17.229 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:17.229 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:17.229 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:17.490 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:17.490 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:17.490 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:17.490 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:17.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:17.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:33:17.490 00:33:17.490 --- 10.0.0.2 ping statistics --- 00:33:17.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.490 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:33:17.490 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:17.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:17.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:33:17.490 00:33:17.490 --- 10.0.0.1 ping statistics --- 00:33:17.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.490 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:33:17.490 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:17.490 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:33:17.490 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:17.490 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:17.490 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:17.490 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:17.490 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:17.490 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:17.490 16:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:17.490 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:17.490 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:17.490 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:17.490 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:17.490 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=2917385 00:33:17.490 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 2917385 00:33:17.490 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:17.490 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2917385 ']' 00:33:17.490 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.490 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:17.490 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
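[editor's note] For reference, the namespace plumbing the trace performed above comes down to the following sequence (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are this run's values; needs root). Moving one port of the NIC into a private netns is what lets a single host act as both target and initiator over real hardware:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target port lives in the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Tagged rule so teardown can strip exactly what was added (see iptr later in the log)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator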
00:33:17.490 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:17.490 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:17.490 [2024-10-01 16:58:09.074621] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:17.490 [2024-10-01 16:58:09.075667] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:33:17.490 [2024-10-01 16:58:09.075712] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:17.490 [2024-10-01 16:58:09.140053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.750 [2024-10-01 16:58:09.204923] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:17.750 [2024-10-01 16:58:09.204960] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:17.750 [2024-10-01 16:58:09.204966] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:17.750 [2024-10-01 16:58:09.204976] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:17.750 [2024-10-01 16:58:09.204981] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:17.750 [2024-10-01 16:58:09.204998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:17.750 [2024-10-01 16:58:09.255731] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:17.750 [2024-10-01 16:58:09.255925] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:17.750 [2024-10-01 16:58:09.329701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:17.750 Malloc0 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:33:17.750 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:17.751 [2024-10-01 16:58:09.401460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:17.751 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.751 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2917423 00:33:17.751 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:17.751 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:17.751 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2917423 /var/tmp/bdevperf.sock 00:33:17.751 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2917423 ']' 00:33:17.751 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:17.751 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:17.751 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:17.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:17.751 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:17.751 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:18.011 [2024-10-01 16:58:09.455708] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
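[editor's note] The target construction that the rpc_cmd calls above performed, written out directly against scripts/rpc.py (rpc_cmd is the harness wrapper around it; paths are this workspace's). The subsystem ends up exposing one 64 MB malloc bdev with 512-byte blocks on 10.0.0.2:4420:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Target runs inside the netns, in interrupt mode, core mask 0x2;
  # the harness waits for /var/tmp/spdk.sock before issuing RPCs
  ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  rpc=$spdk/scripts/rpc.py      # talks to /var/tmp/spdk.sock by default
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf in -z mode waits on its own RPC socket before running anything
  $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &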
00:33:18.011 [2024-10-01 16:58:09.455752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917423 ] 00:33:18.011 [2024-10-01 16:58:09.531041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.011 [2024-10-01 16:58:09.592188] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.011 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:18.011 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:33:18.011 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:18.011 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.011 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:18.271 NVMe0n1 00:33:18.271 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.271 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:18.271 Running I/O for 10 seconds... 00:33:28.552 11310.00 IOPS, 44.18 MiB/s 11883.50 IOPS, 46.42 MiB/s 12158.67 IOPS, 47.49 MiB/s 12288.00 IOPS, 48.00 MiB/s 12294.00 IOPS, 48.02 MiB/s 12300.67 IOPS, 48.05 MiB/s 12379.71 IOPS, 48.36 MiB/s 12419.88 IOPS, 48.52 MiB/s 12422.44 IOPS, 48.53 MiB/s 12483.10 IOPS, 48.76 MiB/s 00:33:28.552 Latency(us) 00:33:28.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.552 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:28.552 Verification LBA range: start 0x0 length 0x4000 00:33:28.552 NVMe0n1 : 10.07 12498.25 48.82 0.00 0.00 81631.52 22887.19 56461.78 00:33:28.552 =================================================================================================================== 00:33:28.552 Total : 12498.25 48.82 0.00 0.00 81631.52 22887.19 56461.78 00:33:28.552 { 00:33:28.552 "results": [ 00:33:28.552 { 00:33:28.552 "job": "NVMe0n1", 00:33:28.552 "core_mask": "0x1", 00:33:28.552 "workload": "verify", 00:33:28.552 "status": "finished", 00:33:28.552 "verify_range": { 00:33:28.552 "start": 0, 00:33:28.552 "length": 16384 00:33:28.552 }, 00:33:28.552 "queue_depth": 1024, 00:33:28.552 "io_size": 4096, 00:33:28.552 "runtime": 10.067893, 00:33:28.552 "iops": 12498.245660735569, 00:33:28.552 "mibps": 48.821272112248316, 00:33:28.552 "io_failed": 0, 00:33:28.552 "io_timeout": 0, 00:33:28.552 "avg_latency_us": 81631.52153769127, 00:33:28.552 "min_latency_us": 22887.187692307692, 00:33:28.552 "max_latency_us": 56461.78461538462 00:33:28.552 } 00:33:28.552 ], 00:33:28.552 "core_count": 1 00:33:28.552 } 00:33:28.552 16:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2917423 00:33:28.552 16:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2917423 
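[editor's note] To reproduce the measurement itself: attach the remote subsystem as a local NVMe bdev over the bdevperf RPC socket, then kick off the preconfigured job (both commands appear verbatim in the trace above; $rpc and $spdk as in the previous sketch). The reported numbers are self-consistent, as the quick check in the comment shows:

  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  # Throughput sanity check: IOPS x IO size should reproduce the MiB/s column
  echo '12498.25 * 4096 / 1048576' | bc -l    # = 48.82, matching "mibps": 48.82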
']' 00:33:28.552 16:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2917423 00:33:28.552 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:33:28.552 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:28.552 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2917423 00:33:28.552 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:28.552 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:28.552 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2917423' 00:33:28.552 killing process with pid 2917423 00:33:28.552 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2917423 00:33:28.552 Received shutdown signal, test time was about 10.000000 seconds 00:33:28.552 00:33:28.552 Latency(us) 00:33:28.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.552 =================================================================================================================== 00:33:28.552 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:28.552 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2917423 00:33:28.552 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:28.552 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:28.552 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:28.552 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:28.552 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:28.552 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:28.552 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:28.553 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:28.553 rmmod nvme_tcp 00:33:28.553 rmmod nvme_fabrics 00:33:28.813 rmmod nvme_keyring 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 2917385 ']' 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 2917385 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2917385 ']' 00:33:28.814 16:58:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2917385 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2917385 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2917385' 00:33:28.814 killing process with pid 2917385 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2917385 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2917385 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:28.814 16:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:31.355 00:33:31.355 real 0m20.932s 00:33:31.355 user 0m23.468s 00:33:31.355 sys 0m6.783s 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:31.355 ************************************ 00:33:31.355 END TEST nvmf_queue_depth 00:33:31.355 ************************************ 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:31.355 ************************************ 00:33:31.355 START TEST nvmf_target_multipath 00:33:31.355 ************************************ 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:31.355 * Looking for test storage... 00:33:31.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:31.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.355 --rc genhtml_branch_coverage=1 00:33:31.355 --rc genhtml_function_coverage=1 00:33:31.355 --rc genhtml_legend=1 00:33:31.355 --rc geninfo_all_blocks=1 00:33:31.355 --rc geninfo_unexecuted_blocks=1 00:33:31.355 00:33:31.355 ' 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:31.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.355 --rc genhtml_branch_coverage=1 00:33:31.355 --rc genhtml_function_coverage=1 00:33:31.355 --rc genhtml_legend=1 00:33:31.355 --rc geninfo_all_blocks=1 00:33:31.355 --rc geninfo_unexecuted_blocks=1 00:33:31.355 00:33:31.355 ' 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:31.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.355 --rc genhtml_branch_coverage=1 00:33:31.355 --rc genhtml_function_coverage=1 00:33:31.355 --rc genhtml_legend=1 00:33:31.355 --rc geninfo_all_blocks=1 00:33:31.355 --rc geninfo_unexecuted_blocks=1 00:33:31.355 00:33:31.355 ' 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:31.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.355 --rc genhtml_branch_coverage=1 00:33:31.355 --rc genhtml_function_coverage=1 00:33:31.355 --rc 
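[editor's note] The cmp_versions walk traced above (scripts/common.sh@333-368) is easier to follow in isolation. A minimal sketch of the same idea, splitting versions on '.', '-' and ':' and comparing numerically component by component; the real helper additionally validates each component through decimal():

  lt() {   # usage: lt VER1 VER2 ; succeeds if VER1 < VER2
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # missing components count as 0
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal is not less-than
  }
  lt 1.15 2 && echo '1.15 < 2'   # the comparison the lcov check above performed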
genhtml_legend=1 00:33:31.355 --rc geninfo_all_blocks=1 00:33:31.355 --rc geninfo_unexecuted_blocks=1 00:33:31.355 00:33:31.355 ' 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:31.355 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:31.356 16:58:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:31.356 16:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:39.499 16:58:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:39.499 16:58:29 
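[editor's note] For reference, the NIC whitelist those array appends build up, gathered in one place (vendor 0x8086 = Intel, 0x15b3 = Mellanox, as set a few lines above; device IDs exactly as they appear in this trace):

  e810: 0x1592 0x159b
  x722: 0x37d2
  mlx:  0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013

On this host the two 0x159b functions match the e810 list, which is why the e810 branch at common.sh@355-356 below narrows pci_devs to them.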
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:39.499 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:39.499 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.499 16:58:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:39.499 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:39.499 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:39.499 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:39.500 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:39.500 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:39.500 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:39.500 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:39.500 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:39.500 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:39.500 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:39.500 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:39.500 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:39.500 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:39.500 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:39.500 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:39.500 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:39.500 16:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:39.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:39.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:33:39.500 00:33:39.500 --- 10.0.0.2 ping statistics --- 00:33:39.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.500 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:39.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:39.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:33:39.500 00:33:39.500 --- 10.0.0.1 ping statistics --- 00:33:39.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.500 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:39.500 only one NIC for nvmf test 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:39.500 rmmod nvme_tcp 00:33:39.500 rmmod nvme_fabrics 00:33:39.500 rmmod nvme_keyring 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p 
]] 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:39.500 16:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:40.884 16:58:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.884 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:40.885 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.885 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:40.885 00:33:40.885 real 0m9.697s 00:33:40.885 user 0m2.100s 00:33:40.885 sys 0m5.502s 00:33:40.885 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:40.885 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:40.885 ************************************ 00:33:40.885 END TEST nvmf_target_multipath 00:33:40.885 ************************************ 00:33:40.885 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:40.885 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:40.885 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:40.885 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:40.885 ************************************ 00:33:40.885 START TEST nvmf_zcopy 00:33:40.885 ************************************ 00:33:40.885 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:40.885 * Looking for test storage... 
00:33:40.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:40.885 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:40.885 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:40.885 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:41.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.145 --rc genhtml_branch_coverage=1 00:33:41.145 --rc genhtml_function_coverage=1 00:33:41.145 --rc genhtml_legend=1 00:33:41.145 --rc geninfo_all_blocks=1 00:33:41.145 --rc geninfo_unexecuted_blocks=1 00:33:41.145 00:33:41.145 ' 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:41.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.145 --rc genhtml_branch_coverage=1 00:33:41.145 --rc genhtml_function_coverage=1 00:33:41.145 --rc genhtml_legend=1 00:33:41.145 --rc geninfo_all_blocks=1 00:33:41.145 --rc geninfo_unexecuted_blocks=1 00:33:41.145 00:33:41.145 ' 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:41.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.145 --rc genhtml_branch_coverage=1 00:33:41.145 --rc genhtml_function_coverage=1 00:33:41.145 --rc genhtml_legend=1 00:33:41.145 --rc geninfo_all_blocks=1 00:33:41.145 --rc geninfo_unexecuted_blocks=1 00:33:41.145 00:33:41.145 ' 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:41.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.145 --rc genhtml_branch_coverage=1 00:33:41.145 --rc genhtml_function_coverage=1 00:33:41.145 --rc genhtml_legend=1 00:33:41.145 --rc geninfo_all_blocks=1 00:33:41.145 --rc geninfo_unexecuted_blocks=1 00:33:41.145 00:33:41.145 ' 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:41.145 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:41.146 16:58:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:41.146 16:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:47.730 16:58:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:47.730 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:47.731 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:47.731 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:47.731 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:47.731 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:47.731 16:58:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:47.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:47.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:33:47.731 00:33:47.731 --- 10.0.0.2 ping statistics --- 00:33:47.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.731 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:47.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:47.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:33:47.731 00:33:47.731 --- 10.0.0.1 ping statistics --- 00:33:47.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.731 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=2926909 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 2926909 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2926909 ']' 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:47.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:47.731 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:47.731 [2024-10-01 16:58:39.406730] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:47.731 [2024-10-01 16:58:39.407748] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:33:47.731 [2024-10-01 16:58:39.407798] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.991 [2024-10-01 16:58:39.470610] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.991 [2024-10-01 16:58:39.524166] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:47.991 [2024-10-01 16:58:39.524199] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:47.991 [2024-10-01 16:58:39.524208] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:47.991 [2024-10-01 16:58:39.524213] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:47.991 [2024-10-01 16:58:39.524217] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:47.991 [2024-10-01 16:58:39.524233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.991 [2024-10-01 16:58:39.572623] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:47.991 [2024-10-01 16:58:39.572813] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
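By this point in the trace, nvmf/common.sh has split the two-port e810 NIC across network namespaces: cvl_0_0 (10.0.0.2) was moved into cvl_0_0_ns_spdk for the target side, while cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator, so the two sides talk over the physical link. nvmfappstart then launches nvmf_tgt inside that namespace in interrupt mode, and waitforlisten blocks until the RPC socket answers. A minimal standalone sketch of the same sequence follows; the SPDK_DIR path is an assumption, everything else mirrors the commands traced above.

#!/usr/bin/env bash
# Sketch only: reproduce the netns topology and interrupt-mode target startup.
set -e
NS=cvl_0_0_ns_spdk                    # namespace name used by nvmf/common.sh
TGT_IF=cvl_0_0 INI_IF=cvl_0_1         # the two ports of the e810 NIC
SPDK_DIR=${SPDK_DIR:-$HOME/spdk}      # assumption: point this at your checkout

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"             # target port moves into the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"         # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port

# Start the target in interrupt mode on core 1 (-m 0x2), as the log does, then
# poll the default RPC socket the way waitforlisten does before issuing RPCs.
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods &> /dev/null; do sleep 0.1; done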
00:33:47.991 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:47.991 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:33:47.991 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:47.991 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:47.991 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:47.991 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:47.991 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:47.991 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:47.991 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.991 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:47.991 [2024-10-01 16:58:39.648596] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.991 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.991 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:47.991 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.991 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:47.991 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.991 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:47.991 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.991 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:47.991 [2024-10-01 16:58:39.673166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:48.252 16:58:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:48.252 malloc0 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:48.252 { 00:33:48.252 "params": { 00:33:48.252 "name": "Nvme$subsystem", 00:33:48.252 "trtype": "$TEST_TRANSPORT", 00:33:48.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:48.252 "adrfam": "ipv4", 00:33:48.252 "trsvcid": "$NVMF_PORT", 00:33:48.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:48.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:48.252 "hdgst": ${hdgst:-false}, 00:33:48.252 "ddgst": ${ddgst:-false} 00:33:48.252 }, 00:33:48.252 "method": "bdev_nvme_attach_controller" 00:33:48.252 } 00:33:48.252 EOF 00:33:48.252 )") 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:33:48.252 16:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:48.252 "params": { 00:33:48.252 "name": "Nvme1", 00:33:48.252 "trtype": "tcp", 00:33:48.252 "traddr": "10.0.0.2", 00:33:48.252 "adrfam": "ipv4", 00:33:48.252 "trsvcid": "4420", 00:33:48.252 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:48.252 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:48.252 "hdgst": false, 00:33:48.252 "ddgst": false 00:33:48.252 }, 00:33:48.252 "method": "bdev_nvme_attach_controller" 00:33:48.252 }' 00:33:48.252 [2024-10-01 16:58:39.776512] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
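Note how both bdevperf runs in this test take their configuration from --json /dev/fd/62 or /dev/fd/63: gen_nvmf_target_json prints the bdev_nvme_attach_controller entry seen in the trace, jq assembles and validates the final document, and bash process substitution hands it to bdevperf without a config file ever touching disk. A hand-rolled equivalent of the first run is sketched below; the outer "subsystems" wrapper is an assumption about what the helper assembles, while the params block is copied from the printf output above.

# Sketch: feed an in-memory JSON config to bdevperf via process substitution.
# $SPDK_DIR is an assumption; the flags match the traced invocation.
"$SPDK_DIR/build/examples/bdevperf" -t 10 -q 128 -w verify -o 8192 --json <(cat << 'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
)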
00:33:48.252 [2024-10-01 16:58:39.776558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2927061 ] 00:33:48.252 [2024-10-01 16:58:39.852294] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.252 [2024-10-01 16:58:39.913940] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.512 Running I/O for 10 seconds...
00:33:58.737 9161.00 IOPS, 71.57 MiB/s
9211.00 IOPS, 71.96 MiB/s
9224.33 IOPS, 72.07 MiB/s
9239.00 IOPS, 72.18 MiB/s
9245.60 IOPS, 72.23 MiB/s
9252.50 IOPS, 72.29 MiB/s
9257.71 IOPS, 72.33 MiB/s
9262.12 IOPS, 72.36 MiB/s
9264.11 IOPS, 72.38 MiB/s
9266.90 IOPS, 72.40 MiB/s
00:33:58.737 Latency(us)
00:33:58.737 Device Information          : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average      min      max
00:33:58.737 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:33:58.737 Verification LBA range: start 0x0 length 0x1000
00:33:58.737 Nvme1n1                     : 10.05       9232.67   72.13    0.00  0.00  13763.86  2495.41 42951.29
00:33:58.737 ===================================================================================================================
00:33:58.737 Total                       : 10.05       9232.67   72.13    0.00  0.00  13763.86  2495.41 42951.29
00:33:58.737 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2928634 00:33:58.737 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:58.737 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:58.737 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:58.737 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:58.737 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:33:58.737 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:33:58.737 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:58.737 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:58.737 { 00:33:58.737 "params": { 00:33:58.737 "name": "Nvme$subsystem", 00:33:58.737 "trtype": "$TEST_TRANSPORT", 00:33:58.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:58.737 "adrfam": "ipv4", 00:33:58.737 "trsvcid": "$NVMF_PORT", 00:33:58.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:58.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:58.737 "hdgst": ${hdgst:-false}, 00:33:58.737 "ddgst": ${ddgst:-false} 00:33:58.737 }, 00:33:58.737 "method": "bdev_nvme_attach_controller" 00:33:58.737 } 00:33:58.737 EOF 00:33:58.737 )") 00:33:58.737 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:33:58.737 [2024-10-01 16:58:50.308515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.737 [2024-10-01 16:58:50.308542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.737 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 --
# jq . 00:33:58.737 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:33:58.737 16:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:58.737 "params": { 00:33:58.737 "name": "Nvme1", 00:33:58.737 "trtype": "tcp", 00:33:58.737 "traddr": "10.0.0.2", 00:33:58.737 "adrfam": "ipv4", 00:33:58.737 "trsvcid": "4420", 00:33:58.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:58.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:58.737 "hdgst": false, 00:33:58.737 "ddgst": false 00:33:58.737 }, 00:33:58.737 "method": "bdev_nvme_attach_controller" 00:33:58.737 }' 00:33:58.737 [2024-10-01 16:58:50.320485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.737 [2024-10-01 16:58:50.320495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.737 [2024-10-01 16:58:50.332483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.737 [2024-10-01 16:58:50.332491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.737 [2024-10-01 16:58:50.344484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.737 [2024-10-01 16:58:50.344492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.737 [2024-10-01 16:58:50.352522] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:33:58.738 [2024-10-01 16:58:50.352568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2928634 ] 00:33:58.738 [2024-10-01 16:58:50.356483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.738 [2024-10-01 16:58:50.356492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.738 [2024-10-01 16:58:50.368484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.738 [2024-10-01 16:58:50.368492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.738 [2024-10-01 16:58:50.380483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.738 [2024-10-01 16:58:50.380492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.738 [2024-10-01 16:58:50.392483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.738 [2024-10-01 16:58:50.392492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.738 [2024-10-01 16:58:50.404483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.738 [2024-10-01 16:58:50.404491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.738 [2024-10-01 16:58:50.416483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.738 [2024-10-01 16:58:50.416491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.998 [2024-10-01 16:58:50.427979] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:58.998 [2024-10-01 16:58:50.428483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.998 [2024-10-01 16:58:50.428491] 
00:33:58.998 [2024-10-01 16:58:50.440485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.998 [2024-10-01 16:58:50.440495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.998 [2024-10-01 16:58:50.452483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.452493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.999 [2024-10-01 16:58:50.464485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.464500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.999 [2024-10-01 16:58:50.476484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.476493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.999 [2024-10-01 16:58:50.488483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.488492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.999 [2024-10-01 16:58:50.489217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:33:58.999 [2024-10-01 16:58:50.500488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.500501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.999 [2024-10-01 16:58:50.512491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.512506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.999 [2024-10-01 16:58:50.524486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.524494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.999 [2024-10-01 16:58:50.536485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.536494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.999 [2024-10-01 16:58:50.548483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.548491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.999 [2024-10-01 16:58:50.560493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.560509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.999 [2024-10-01 16:58:50.572485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.572495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.999 [2024-10-01 16:58:50.584485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.584496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.999 [2024-10-01 16:58:50.596485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.596495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
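The repeating error pair above is expected behavior, not a failure of the run: the test keeps issuing the nvmf_subsystem_add_ns RPC for NSID 1, which the subsystem already owns, so each attempt is rejected in spdk_nvmf_subsystem_add_ns_ext and reported by the nvmf_rpc_ns_paused callback once the subsystem has been paused for the operation. A hypothetical single reproduction (rpc.py invocation assumed; Malloc0 stands in for whatever bdev backs the namespace, neither is taken from this log):

  $ ./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0
  # expected to fail: Requested NSID 1 already in use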
00:33:58.999 [2024-10-01 16:58:50.608483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.608492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.999 [2024-10-01 16:58:50.620483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.620491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.999 [2024-10-01 16:58:50.632483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.632491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.999 [2024-10-01 16:58:50.644484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.644495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.999 [2024-10-01 16:58:50.656483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.656491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.999 [2024-10-01 16:58:50.668482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.668490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.999 [2024-10-01 16:58:50.680482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.999 [2024-10-01 16:58:50.680491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:59.260 [2024-10-01 16:58:50.692483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:59.260 [2024-10-01 16:58:50.692494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:59.260 [2024-10-01 16:58:50.704483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:59.260 [2024-10-01 16:58:50.704492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:59.260 [2024-10-01 16:58:50.716483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:59.260 [2024-10-01 16:58:50.716491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:59.260 [2024-10-01 16:58:50.728483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:59.260 [2024-10-01 16:58:50.728493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:59.260 [2024-10-01 16:58:50.740488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:59.260 [2024-10-01 16:58:50.740504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:59.260 Running I/O for 5 seconds...
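From here the second bdevperf instance (pid 2928634) runs its 5-second random read/write job (-t 5 -q 128 -w randrw -M 50 -o 8192) while the namespace-add attempts continue, so the error pairs keep interleaving with the I/O samples below; by the same arithmetic as before, the 18243.00 IOPS sample further on works out to 18243 x 8192 / 2^20 = 142.52 MiB/s, matching the log. The overall shape of this phase, as a sketch under assumed names (not the verbatim target/zcopy.sh):

  # run bdevperf against the generated attach config while the RPC loop churns
  bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
  perfpid=$!
  while kill -0 "$perfpid" 2>/dev/null; do
      # expected to fail with "Requested NSID 1 already in use"
      ./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0 || true
  done
  wait "$perfpid"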
00:33:59.260 [2024-10-01 16:58:50.757446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.260 [2024-10-01 16:58:50.757463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.260 [2024-10-01 16:58:50.771719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.260 [2024-10-01 16:58:50.771739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.260 [2024-10-01 16:58:50.784899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.260 [2024-10-01 16:58:50.784917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.260 [2024-10-01 16:58:50.799839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.260 [2024-10-01 16:58:50.799856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.260 [2024-10-01 16:58:50.812966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.260 [2024-10-01 16:58:50.812988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.260 [2024-10-01 16:58:50.827733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.260 [2024-10-01 16:58:50.827750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.260 [2024-10-01 16:58:50.841103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.260 [2024-10-01 16:58:50.841122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.260 [2024-10-01 16:58:50.855711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.260 [2024-10-01 16:58:50.855728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.260 [2024-10-01 16:58:50.868942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.260 [2024-10-01 16:58:50.868958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.260 [2024-10-01 16:58:50.883883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.260 [2024-10-01 16:58:50.883899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.260 [2024-10-01 16:58:50.897662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.260 [2024-10-01 16:58:50.897678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.260 [2024-10-01 16:58:50.912043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.260 [2024-10-01 16:58:50.912059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.260 [2024-10-01 16:58:50.925550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.260 [2024-10-01 16:58:50.925566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.260 [2024-10-01 16:58:50.940664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.260 [2024-10-01 16:58:50.940680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:50.952858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 
[2024-10-01 16:58:50.952874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:50.967998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 [2024-10-01 16:58:50.968014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:50.981821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 [2024-10-01 16:58:50.981836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:50.996077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 [2024-10-01 16:58:50.996093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:51.009261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 [2024-10-01 16:58:51.009277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:51.023431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 [2024-10-01 16:58:51.023447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:51.036332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 [2024-10-01 16:58:51.036348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:51.049631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 [2024-10-01 16:58:51.049646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:51.064114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 [2024-10-01 16:58:51.064130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:51.077147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 [2024-10-01 16:58:51.077163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:51.091829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 [2024-10-01 16:58:51.091844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:51.105153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 [2024-10-01 16:58:51.105176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:51.120439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 [2024-10-01 16:58:51.120455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:51.132859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 [2024-10-01 16:58:51.132874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:51.147952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 [2024-10-01 16:58:51.147968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:51.160438] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 [2024-10-01 16:58:51.160454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:51.173475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 [2024-10-01 16:58:51.173491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:51.187999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 [2024-10-01 16:58:51.188015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.520 [2024-10-01 16:58:51.201445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.520 [2024-10-01 16:58:51.201461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.780 [2024-10-01 16:58:51.215557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.780 [2024-10-01 16:58:51.215573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.780 [2024-10-01 16:58:51.228612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.780 [2024-10-01 16:58:51.228627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.780 [2024-10-01 16:58:51.244363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.780 [2024-10-01 16:58:51.244379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.780 [2024-10-01 16:58:51.257480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.780 [2024-10-01 16:58:51.257496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.780 [2024-10-01 16:58:51.271885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.780 [2024-10-01 16:58:51.271901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.780 [2024-10-01 16:58:51.285267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.780 [2024-10-01 16:58:51.285283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.780 [2024-10-01 16:58:51.299339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.780 [2024-10-01 16:58:51.299355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.780 [2024-10-01 16:58:51.312624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.780 [2024-10-01 16:58:51.312639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.780 [2024-10-01 16:58:51.324710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.780 [2024-10-01 16:58:51.324727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.780 [2024-10-01 16:58:51.337462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.780 [2024-10-01 16:58:51.337478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.780 [2024-10-01 16:58:51.351601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.780 [2024-10-01 16:58:51.351617] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.780 [2024-10-01 16:58:51.364634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.780 [2024-10-01 16:58:51.364653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.780 [2024-10-01 16:58:51.379904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.780 [2024-10-01 16:58:51.379920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.780 [2024-10-01 16:58:51.393070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.780 [2024-10-01 16:58:51.393085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.780 [2024-10-01 16:58:51.408576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.780 [2024-10-01 16:58:51.408592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.780 [2024-10-01 16:58:51.421962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.780 [2024-10-01 16:58:51.421982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.780 [2024-10-01 16:58:51.435854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.780 [2024-10-01 16:58:51.435870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.780 [2024-10-01 16:58:51.448680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.780 [2024-10-01 16:58:51.448695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.039 [2024-10-01 16:58:51.463824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.463840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.477275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.477290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.490883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.490899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.504736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.504751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.516211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.516227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.529581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.529596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.544274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.544289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.557924] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.557940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.571906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.571922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.584480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.584495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.597525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.597541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.611416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.611432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.624515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.624531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.636954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.636972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.651401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.651416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.664631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.664647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.679912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.679928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.692630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.692645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.708121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.708136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.040 [2024-10-01 16:58:51.721207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.040 [2024-10-01 16:58:51.721223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.301 [2024-10-01 16:58:51.736244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.301 [2024-10-01 16:58:51.736260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.301 [2024-10-01 16:58:51.748236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.301 [2024-10-01 16:58:51.748252] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.301 18243.00 IOPS, 142.52 MiB/s [2024-10-01 16:58:51.760996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.301 [2024-10-01 16:58:51.761011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.301 [2024-10-01 16:58:51.776208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.301 [2024-10-01 16:58:51.776224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.301 [2024-10-01 16:58:51.789479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.301 [2024-10-01 16:58:51.789494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.301 [2024-10-01 16:58:51.803385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.301 [2024-10-01 16:58:51.803400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.301 [2024-10-01 16:58:51.816670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.301 [2024-10-01 16:58:51.816686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.301 [2024-10-01 16:58:51.829427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.301 [2024-10-01 16:58:51.829443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.301 [2024-10-01 16:58:51.843972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.301 [2024-10-01 16:58:51.843989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.301 [2024-10-01 16:58:51.857204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.301 [2024-10-01 16:58:51.857220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.301 [2024-10-01 16:58:51.872627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.301 [2024-10-01 16:58:51.872644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.301 [2024-10-01 16:58:51.884172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.301 [2024-10-01 16:58:51.884189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.301 [2024-10-01 16:58:51.897276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.301 [2024-10-01 16:58:51.897292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.301 [2024-10-01 16:58:51.912748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.301 [2024-10-01 16:58:51.912764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.301 [2024-10-01 16:58:51.928080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.301 [2024-10-01 16:58:51.928096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.301 [2024-10-01 16:58:51.941188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.301 [2024-10-01 16:58:51.941204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.301 [2024-10-01 16:58:51.955474] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.301 [2024-10-01 16:58:51.955490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.301 [2024-10-01 16:58:51.969207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.301 [2024-10-01 16:58:51.969223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:51.984319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:51.984335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:51.997305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:51.997320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:52.011657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:52.011673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:52.025429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:52.025445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:52.039642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:52.039657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:52.053002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:52.053017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:52.068100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:52.068115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:52.081234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:52.081250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:52.096177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:52.096193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:52.109236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:52.109252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:52.123720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:52.123736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:52.136968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:52.136988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:52.152152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:52.152169] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:52.165421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:52.165437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:52.179791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:52.179807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:52.192960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:52.192980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:52.207420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:52.207437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:52.220373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:52.220390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.563 [2024-10-01 16:58:52.232628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.563 [2024-10-01 16:58:52.232644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.823 [2024-10-01 16:58:52.248150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.248167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.824 [2024-10-01 16:58:52.261248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.261264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.824 [2024-10-01 16:58:52.276594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.276611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.824 [2024-10-01 16:58:52.289264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.289280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.824 [2024-10-01 16:58:52.304122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.304138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.824 [2024-10-01 16:58:52.316221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.316238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.824 [2024-10-01 16:58:52.329571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.329587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.824 [2024-10-01 16:58:52.344087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.344104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.824 [2024-10-01 16:58:52.357445] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.357461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.824 [2024-10-01 16:58:52.372017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.372033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.824 [2024-10-01 16:58:52.385253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.385268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.824 [2024-10-01 16:58:52.400118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.400139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.824 [2024-10-01 16:58:52.413409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.413425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.824 [2024-10-01 16:58:52.427605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.427622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.824 [2024-10-01 16:58:52.441211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.441227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.824 [2024-10-01 16:58:52.455381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.455397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.824 [2024-10-01 16:58:52.468436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.468452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.824 [2024-10-01 16:58:52.481375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.481391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.824 [2024-10-01 16:58:52.496022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.824 [2024-10-01 16:58:52.496038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 [2024-10-01 16:58:52.508982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.508999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 [2024-10-01 16:58:52.523557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.523573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 [2024-10-01 16:58:52.537161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.537177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 [2024-10-01 16:58:52.552384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.552399] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 [2024-10-01 16:58:52.565577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.565593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 [2024-10-01 16:58:52.579715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.579731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 [2024-10-01 16:58:52.593290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.593306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 [2024-10-01 16:58:52.607675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.607691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 [2024-10-01 16:58:52.621236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.621252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 [2024-10-01 16:58:52.635310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.635326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 [2024-10-01 16:58:52.648638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.648654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 [2024-10-01 16:58:52.664138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.664160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 [2024-10-01 16:58:52.676678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.676693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 [2024-10-01 16:58:52.691755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.691772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 [2024-10-01 16:58:52.705359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.705375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 [2024-10-01 16:58:52.719825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.719842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 [2024-10-01 16:58:52.733210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.733226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 [2024-10-01 16:58:52.747604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.747619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.084 18310.00 IOPS, 143.05 MiB/s [2024-10-01 16:58:52.760414] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.084 [2024-10-01 16:58:52.760430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:52.773912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:52.773928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:52.787442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:52.787458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:52.801020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:52.801036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:52.816466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:52.816482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:52.829503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:52.829518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:52.843652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:52.843668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:52.857013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:52.857029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:52.872058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:52.872074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:52.885208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:52.885224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:52.899904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:52.899919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:52.913135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:52.913150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:52.927146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:52.927166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:52.940547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:52.940562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:52.952348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:52.952364] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:52.965668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:52.965683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:52.979655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:52.979671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:52.992831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:52.992847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:53.007668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:53.007684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.345 [2024-10-01 16:58:53.020973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.345 [2024-10-01 16:58:53.020989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.035685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.035702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.049007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.049023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.064143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.064158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.076874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.076890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.092052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.092068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.105493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.105508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.119765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.119781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.133282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.133297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.147228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.147244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.161054] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.161069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.175276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.175292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.188877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.188893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.203545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.203561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.217243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.217258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.231493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.231510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.244957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.244978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.259105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.259122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.272578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.272595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.606 [2024-10-01 16:58:53.285162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.606 [2024-10-01 16:58:53.285178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.868 [2024-10-01 16:58:53.299635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.868 [2024-10-01 16:58:53.299651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.868 [2024-10-01 16:58:53.313080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.868 [2024-10-01 16:58:53.313095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.868 [2024-10-01 16:58:53.327760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.868 [2024-10-01 16:58:53.327776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.868 [2024-10-01 16:58:53.341265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.868 [2024-10-01 16:58:53.341280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.868 [2024-10-01 16:58:53.355801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.868 [2024-10-01 16:58:53.355817] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:01.868 [... the same two-line error pair -- subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace -- repeats every 10-15 ms from 16:58:53.368 through 16:58:55.747; only the timestamps change. The periodic I/O progress readings below are the only other output in this stretch ...]
00:34:02.130 18326.00 IOPS, 143.17 MiB/s
00:34:03.173 18320.50 IOPS, 143.13 MiB/s
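What the repetition captures: the zcopy test keeps re-issuing an add-namespace RPC for an NSID that already exists while I/O is in flight, and spdk_nvmf_subsystem_add_ns_ext rejects every attempt. Each iteration is effectively the following call (a sketch; the nqn and bdev names are taken from the rpc_cmd lines later in this log, and the exact loop body lives in test/nvmf/target/zcopy.sh):

    # fails as long as NSID 1 is still attached to the subsystem
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # target side logs:  Requested NSID 1 already in use
    # RPC side reports:  Unable to add namespace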
00:34:04.217 18321.00 IOPS, 143.13 MiB/s
00:34:04.217 [... error pair continues through 16:58:55.768 ...]
00:34:04.217
00:34:04.217 Latency(us)
00:34:04.217 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:34:04.217 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:34:04.217 Nvme1n1             :       5.01   18323.11     143.15      0.00     0.00    6979.34    2734.87   12451.84
00:34:04.217 ===================================================================================================================
00:34:04.217 Total               :              18323.11     143.15      0.00     0.00    6979.34    2734.87   12451.84
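The summary is internally consistent: with the 8192-byte I/O size shown in the job line, MiB/s is IOPS divided by 128 (8192 bytes / 2^20 bytes per MiB). A quick check, illustrative and not part of the captured output:

    $ echo 'scale=4; 18323.11 * 8192 / (1024 * 1024)' | bc
    143.1493    # matches the reported 143.15 MiB/s after rounding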
00:34:04.217 [... after the summary the error pair keeps repeating at ~8 ms intervals, from 16:58:55.776 through 16:58:55.896, until the I/O generator is torn down ...]
00:34:04.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2928634) - No such process
00:34:04.478 16:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2928634
00:34:04.478 16:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:04.478 16:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:04.478 16:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:04.478 16:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:04.478 16:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:34:04.478 16:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:04.478 16:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:04.478 delay0
00:34:04.479 16:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:04.479 16:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:34:04.479 16:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:04.479 16:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:04.479 16:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
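rpc_cmd above is the test harness's wrapper around SPDK's JSON-RPC client. Outside the harness, the same three steps look roughly like this with scripts/rpc.py (a sketch assuming a target listening on the default RPC socket; names and latency values are taken from the trace above):

    # drop the namespace the add-loop was fighting over
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # wrap malloc0 in a delay bdev adding 1000000 us (~1 s) to every read/write
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # re-expose it as NSID 1, giving the abort tool below slow I/O worth cancelling
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1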
00:34:04.479 16:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-10-01 16:58:56.082112] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:34:12.611 Initializing NVMe Controllers
00:34:12.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:12.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:34:12.611 Initialization complete. Launching workers.
00:34:12.611 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5313
00:34:12.611 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5600, failed to submit 33
00:34:12.611 success 5393, unsuccessful 207, failed 0
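The abort counters reconcile exactly: the 5633 I/Os the worker issued (320 completed + 5313 failed) match the 5633 abort attempts (5600 submitted + 33 that could not be submitted), and the submitted aborts split into 5393 successful + 207 unsuccessful. A quick check, illustrative only:

    $ echo $((320 + 5313)) $((5600 + 33)) $((5393 + 207))
    5633 5633 5600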
00:34:12.611 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:34:12.611 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:34:12.611 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:34:12.611 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:34:12.611 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:12.611 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:34:12.611 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:12.611 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:12.611 rmmod nvme_tcp
00:34:12.611 rmmod nvme_fabrics
00:34:12.611 rmmod nvme_keyring
00:34:12.611 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:12.611 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:34:12.611 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:34:12.611 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 2926909 ']'
00:34:12.611 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 2926909
00:34:12.611 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2926909 ']'
00:34:12.611 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2926909
00:34:12.611 16:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2926909
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2926909'
00:34:12.611 killing process with pid 2926909
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2926909
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2926909
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:12.611 16:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:13.994 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:13.994
00:34:13.994 real    0m32.872s
00:34:13.994 user    0m43.390s
00:34:13.994 sys     0m11.774s
00:34:13.994 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:13.994 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:13.994 ************************************
00:34:13.994 END TEST nvmf_zcopy
00:34:13.994 ************************************
00:34:13.994 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:34:13.994 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:34:13.994 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:34:13.994 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:13.994 ************************************
00:34:13.994 START TEST nvmf_nmic
00:34:13.994 ************************************
00:34:13.994 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:34:13.994 * Looking for test storage...
00:34:13.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:34:13.994 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:34:13.994 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version
00:34:13.994 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:34:13.994 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:34:13.994 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:13.994 [... scripts/common.sh@333-@368: cmp_versions splits "1.15" and "2" on ".-:" into ver1=(1 15), ver1_l=2 and ver2=(2), ver2_l=1, validates each field with decimal(), and compares component-wise; ver1[0]=1 < ver2[0]=2, so it returns 0: the installed lcov (1.15) predates 2.0 ...]
00:34:13.994 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:13.994 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export LCOV_OPTS with the pre-2.0 flag set (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1)
00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export LCOV='lcov' with the same --rc flags appended
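The trace above is scripts/common.sh deciding that lcov 1.15 sorts before 2.0 and therefore picking the old-style --rc options. A condensed sketch of the same component-wise compare (illustrative, not the actual implementation; it assumes plain numeric fields):

    lt() {                              # succeeds when version $1 sorts before $2
        local -a a b; local i
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing component decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                        # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov < 2: keep the --rc lcov_*_coverage flags"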
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:13.995 16:59:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:13.995 16:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:22.131 16:59:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:22.131 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.131 16:59:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:22.131 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:22.131 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.131 
16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:22.131 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:34:22.131 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
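[annotation] The nvmf_tcp_init trace above shows the harness splitting the two E810 ports of one host so the SPDK target and the kernel initiator talk over a real link: one port (cvl_0_0) is moved into a network namespace for the target, the other (cvl_0_1) stays in the root namespace for the initiator. The sketch below is a minimal reconstruction of that pattern, using only the commands visible in this run; the device names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses come from this log and would differ on other hardware.
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator IP, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside namespace
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # open the NVMe/TCP port; the harness tags the rule (SPDK_NVMF comment) so
  # teardown can strip it later via iptables-save | grep -v SPDK_NVMF | iptables-restore
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                         # sanity check, as the log does next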
00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:22.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:22.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:34:22.132 00:34:22.132 --- 10.0.0.2 ping statistics --- 00:34:22.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.132 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:22.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:22.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:34:22.132 00:34:22.132 --- 10.0.0.1 ping statistics --- 00:34:22.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.132 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=2934630 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # waitforlisten 2934630 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2934630 ']' 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:22.132 16:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:22.132 [2024-10-01 16:59:12.806392] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:22.132 [2024-10-01 16:59:12.807494] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:34:22.132 [2024-10-01 16:59:12.807545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:22.132 [2024-10-01 16:59:12.896697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:22.132 [2024-10-01 16:59:12.992333] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:22.132 [2024-10-01 16:59:12.992389] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:22.132 [2024-10-01 16:59:12.992397] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:22.132 [2024-10-01 16:59:12.992404] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:22.132 [2024-10-01 16:59:12.992414] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:22.132 [2024-10-01 16:59:12.992540] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:22.132 [2024-10-01 16:59:12.992664] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:34:22.132 [2024-10-01 16:59:12.992795] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:34:22.132 [2024-10-01 16:59:12.992798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.132 [2024-10-01 16:59:13.083363] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:22.132 [2024-10-01 16:59:13.083531] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:22.132 [2024-10-01 16:59:13.083721] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:34:22.132 [2024-10-01 16:59:13.084041] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:22.132 [2024-10-01 16:59:13.084326] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:22.132 [2024-10-01 16:59:13.749700] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:22.132 Malloc0 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.132 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:22.392 
16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:22.392 [2024-10-01 16:59:13.821866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:22.392 test case1: single bdev can't be used in multiple subsystems 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:22.392 [2024-10-01 16:59:13.857286] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:22.392 [2024-10-01 16:59:13.857305] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:22.392 [2024-10-01 16:59:13.857312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.392 request: 00:34:22.392 { 00:34:22.392 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:22.392 "namespace": { 00:34:22.392 "bdev_name": "Malloc0", 00:34:22.392 "no_auto_visible": false 00:34:22.392 }, 00:34:22.392 "method": "nvmf_subsystem_add_ns", 00:34:22.392 "req_id": 1 00:34:22.392 } 00:34:22.392 Got JSON-RPC error response 00:34:22.392 response: 00:34:22.392 { 00:34:22.392 "code": -32602, 00:34:22.392 "message": "Invalid parameters" 00:34:22.392 } 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:22.392 16:59:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:22.392 Adding namespace failed - expected result. 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:22.392 test case2: host connect to nvmf target in multiple paths 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:22.392 [2024-10-01 16:59:13.869387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.392 16:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:22.651 16:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:22.912 16:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:22.912 16:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:34:22.912 16:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:34:22.912 16:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:34:22.912 16:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:34:25.449 16:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:34:25.449 16:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:34:25.449 16:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:34:25.449 16:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:34:25.449 16:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:34:25.449 16:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:34:25.449 16:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:25.449 [global] 00:34:25.449 thread=1 00:34:25.449 invalidate=1 
00:34:25.449 rw=write 00:34:25.449 time_based=1 00:34:25.449 runtime=1 00:34:25.449 ioengine=libaio 00:34:25.449 direct=1 00:34:25.449 bs=4096 00:34:25.449 iodepth=1 00:34:25.449 norandommap=0 00:34:25.449 numjobs=1 00:34:25.449 00:34:25.449 verify_dump=1 00:34:25.449 verify_backlog=512 00:34:25.449 verify_state_save=0 00:34:25.449 do_verify=1 00:34:25.449 verify=crc32c-intel 00:34:25.449 [job0] 00:34:25.449 filename=/dev/nvme0n1 00:34:25.449 Could not set queue depth (nvme0n1) 00:34:25.449 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:25.449 fio-3.35 00:34:25.449 Starting 1 thread 00:34:26.829 00:34:26.829 job0: (groupid=0, jobs=1): err= 0: pid=2935473: Tue Oct 1 16:59:18 2024 00:34:26.829 read: IOPS=18, BW=73.6KiB/s (75.3kB/s)(76.0KiB/1033msec) 00:34:26.829 slat (nsec): min=9811, max=27463, avg=25832.47, stdev=3892.52 00:34:26.829 clat (usec): min=41639, max=42011, avg=41947.40, stdev=79.65 00:34:26.829 lat (usec): min=41649, max=42038, avg=41973.24, stdev=83.30 00:34:26.829 clat percentiles (usec): 00:34:26.829 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:34:26.829 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:26.829 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:26.829 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:26.829 | 99.99th=[42206] 00:34:26.829 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:34:26.829 slat (nsec): min=8904, max=64262, avg=28460.58, stdev=10649.77 00:34:26.829 clat (usec): min=116, max=641, avg=423.70, stdev=103.17 00:34:26.829 lat (usec): min=125, max=674, avg=452.16, stdev=108.96 00:34:26.829 clat percentiles (usec): 00:34:26.829 | 1.00th=[ 192], 5.00th=[ 239], 10.00th=[ 273], 20.00th=[ 343], 00:34:26.829 | 30.00th=[ 375], 40.00th=[ 412], 50.00th=[ 445], 60.00th=[ 461], 00:34:26.829 | 70.00th=[ 469], 80.00th=[ 502], 90.00th=[ 553], 95.00th=[ 594], 00:34:26.829 | 99.00th=[ 619], 99.50th=[ 635], 99.90th=[ 644], 99.95th=[ 644], 00:34:26.829 | 99.99th=[ 644] 00:34:26.829 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:26.829 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:26.830 lat (usec) : 250=7.16%, 500=69.49%, 750=19.77% 00:34:26.830 lat (msec) : 50=3.58% 00:34:26.830 cpu : usr=1.45%, sys=1.26%, ctx=531, majf=0, minf=1 00:34:26.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:26.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.830 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.830 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:26.830 00:34:26.830 Run status group 0 (all jobs): 00:34:26.830 READ: bw=73.6KiB/s (75.3kB/s), 73.6KiB/s-73.6KiB/s (75.3kB/s-75.3kB/s), io=76.0KiB (77.8kB), run=1033-1033msec 00:34:26.830 WRITE: bw=1983KiB/s (2030kB/s), 1983KiB/s-1983KiB/s (2030kB/s-2030kB/s), io=2048KiB (2097kB), run=1033-1033msec 00:34:26.830 00:34:26.830 Disk stats (read/write): 00:34:26.830 nvme0n1: ios=65/512, merge=0/0, ticks=667/194, in_queue=861, util=92.69% 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:26.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:26.830 16:59:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:26.830 rmmod nvme_tcp 00:34:26.830 rmmod nvme_fabrics 00:34:26.830 rmmod nvme_keyring 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 2934630 ']' 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 2934630 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2934630 ']' 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2934630 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2934630 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 2934630' 00:34:26.830 killing process with pid 2934630 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2934630 00:34:26.830 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2934630 00:34:27.090 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:27.090 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:27.090 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:27.090 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:27.090 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:27.090 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:34:27.090 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:34:27.090 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:27.090 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:27.090 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.090 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:27.090 16:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.002 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:29.002 00:34:29.002 real 0m15.295s 00:34:29.002 user 0m29.510s 00:34:29.002 sys 0m7.232s 00:34:29.002 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:29.002 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:29.002 ************************************ 00:34:29.002 END TEST nvmf_nmic 00:34:29.002 ************************************ 00:34:29.002 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:29.002 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:29.264 ************************************ 00:34:29.264 START TEST nvmf_fio_target 00:34:29.264 ************************************ 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:29.264 * Looking for test storage... 
00:34:29.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:29.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.264 --rc genhtml_branch_coverage=1 00:34:29.264 --rc genhtml_function_coverage=1 00:34:29.264 --rc genhtml_legend=1 00:34:29.264 --rc geninfo_all_blocks=1 00:34:29.264 --rc geninfo_unexecuted_blocks=1 00:34:29.264 00:34:29.264 ' 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:29.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.264 --rc genhtml_branch_coverage=1 00:34:29.264 --rc genhtml_function_coverage=1 00:34:29.264 --rc genhtml_legend=1 00:34:29.264 --rc geninfo_all_blocks=1 00:34:29.264 --rc geninfo_unexecuted_blocks=1 00:34:29.264 00:34:29.264 ' 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:29.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.264 --rc genhtml_branch_coverage=1 00:34:29.264 --rc genhtml_function_coverage=1 00:34:29.264 --rc genhtml_legend=1 00:34:29.264 --rc geninfo_all_blocks=1 00:34:29.264 --rc geninfo_unexecuted_blocks=1 00:34:29.264 00:34:29.264 ' 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:29.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.264 --rc genhtml_branch_coverage=1 00:34:29.264 --rc genhtml_function_coverage=1 00:34:29.264 --rc genhtml_legend=1 00:34:29.264 --rc geninfo_all_blocks=1 00:34:29.264 --rc geninfo_unexecuted_blocks=1 00:34:29.264 
00:34:29.264 ' 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:29.264 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:29.526 16:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:36.106 16:59:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:36.106 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:36.107 16:59:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:36.107 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:36.107 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:36.107 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:36.107 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:36.107 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:36.367 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:36.367 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:36.367 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:36.367 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:36.367 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:36.367 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:36.367 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:36.367 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:36.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:36.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:34:36.367 00:34:36.367 --- 10.0.0.2 ping statistics --- 00:34:36.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:36.367 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:34:36.367 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:36.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:36.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:34:36.367 00:34:36.367 --- 10.0.0.1 ping statistics --- 00:34:36.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:36.367 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:34:36.367 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:36.367 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:34:36.367 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:36.367 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:36.367 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:36.367 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:36.367 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:36.367 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:36.367 16:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:36.367 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:36.367 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:36.367 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:36.367 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:36.367 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=2939619 00:34:36.367 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 2939619 00:34:36.367 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:36.367 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2939619 ']' 00:34:36.367 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.367 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:36.367 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:36.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
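The nvmftestinit trace above builds a back-to-back NVMe/TCP topology out of the two e810 ports on the same host: cvl_0_0 is moved into a private network namespace to play the target, while cvl_0_1 stays in the root namespace as the initiator. A condensed replay of that sequence, assuming the cvl_0_0/cvl_0_1 device names from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator interface; the harness tags the real
  # rule with an 'SPDK_NVMF:...' comment so it can be removed during teardown
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

Both pings succeeding is the precondition for the rest of the test; that is what the return 0 in the trace reflects.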
00:34:36.367 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:36.367 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:36.627 [2024-10-01 16:59:28.066673] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:36.627 [2024-10-01 16:59:28.067635] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:34:36.627 [2024-10-01 16:59:28.067672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:36.627 [2024-10-01 16:59:28.151340] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:36.627 [2024-10-01 16:59:28.212789] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:36.627 [2024-10-01 16:59:28.212825] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:36.627 [2024-10-01 16:59:28.212834] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:36.627 [2024-10-01 16:59:28.212840] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:36.627 [2024-10-01 16:59:28.212846] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:36.627 [2024-10-01 16:59:28.212954] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:36.627 [2024-10-01 16:59:28.213097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:34:36.627 [2024-10-01 16:59:28.213110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:34:36.627 [2024-10-01 16:59:28.213113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:36.627 [2024-10-01 16:59:28.275073] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:36.627 [2024-10-01 16:59:28.275215] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:36.627 [2024-10-01 16:59:28.275413] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:36.627 [2024-10-01 16:59:28.275589] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:36.627 [2024-10-01 16:59:28.275803] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
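With the target namespace up, target/fio.sh launches nvmf_tgt inside it in interrupt mode and then provisions it over the /var/tmp/spdk.sock RPC socket. A condensed sketch of the launch plus the rpc.py calls traced below (paths shortened; the 64/512 arguments are MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE from target/fio.sh@11-12):

  # -i 0 sets the shm id, -m 0xF pins four cores, --interrupt-mode makes the
  # reactors sleep on events instead of busy-polling
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  # waitforlisten polls until the app answers on /var/tmp/spdk.sock

  rpc_py=scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB in-capsule data
  $rpc_py bdev_malloc_create 64 512                  # Malloc0, Malloc1: plain namespaces
  $rpc_py bdev_malloc_create 64 512
  $rpc_py bdev_malloc_create 64 512                  # Malloc2, Malloc3 feed a raid0
  $rpc_py bdev_malloc_create 64 512
  $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc_py bdev_malloc_create 64 512                  # Malloc4..Malloc6 feed a concat
  $rpc_py bdev_malloc_create 64 512
  $rpc_py bdev_malloc_create 64 512
  $rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  # initiator side: connect, then poll lsblk until 4 SPDKISFASTANDAWESOME devices appear
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420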
00:34:37.566 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:37.566 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:34:37.566 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:37.566 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:37.566 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:37.566 16:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:37.566 16:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:37.566 [2024-10-01 16:59:29.206118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:37.566 16:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:37.826 16:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:37.826 16:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:38.087 16:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:38.087 16:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:38.348 16:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:38.348 16:59:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:38.608 16:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:38.608 16:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:38.869 16:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:38.869 16:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:38.869 16:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:39.131 16:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:39.131 16:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:39.391 16:59:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:39.392 16:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:39.651 16:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:39.651 16:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:39.651 16:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:39.911 16:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:39.911 16:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:40.170 16:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:40.431 [2024-10-01 16:59:31.909870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:40.431 16:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:40.690 16:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:40.950 16:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:41.212 16:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:41.212 16:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:34:41.212 16:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:34:41.212 16:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:34:41.212 16:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:34:41.212 16:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:34:43.126 16:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:34:43.126 16:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:34:43.126 16:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:34:43.126 16:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:34:43.126 16:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:34:43.126 16:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:34:43.126 16:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:43.126 [global] 00:34:43.126 thread=1 00:34:43.126 invalidate=1 00:34:43.126 rw=write 00:34:43.126 time_based=1 00:34:43.126 runtime=1 00:34:43.126 ioengine=libaio 00:34:43.126 direct=1 00:34:43.126 bs=4096 00:34:43.126 iodepth=1 00:34:43.126 norandommap=0 00:34:43.126 numjobs=1 00:34:43.126 00:34:43.126 verify_dump=1 00:34:43.126 verify_backlog=512 00:34:43.126 verify_state_save=0 00:34:43.126 do_verify=1 00:34:43.126 verify=crc32c-intel 00:34:43.126 [job0] 00:34:43.126 filename=/dev/nvme0n1 00:34:43.126 [job1] 00:34:43.126 filename=/dev/nvme0n2 00:34:43.126 [job2] 00:34:43.126 filename=/dev/nvme0n3 00:34:43.126 [job3] 00:34:43.126 filename=/dev/nvme0n4 00:34:43.394 Could not set queue depth (nvme0n1) 00:34:43.394 Could not set queue depth (nvme0n2) 00:34:43.394 Could not set queue depth (nvme0n3) 00:34:43.394 Could not set queue depth (nvme0n4) 00:34:43.656 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:43.656 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:43.656 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:43.656 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:43.656 fio-3.35 00:34:43.656 Starting 4 threads 00:34:45.039 00:34:45.039 job0: (groupid=0, jobs=1): err= 0: pid=2941064: Tue Oct 1 16:59:36 2024 00:34:45.039 read: IOPS=18, BW=75.8KiB/s (77.6kB/s)(76.0KiB/1003msec) 00:34:45.039 slat (nsec): min=25174, max=26164, avg=25474.42, stdev=200.87 00:34:45.039 clat (usec): min=1080, max=42053, avg=39691.96, stdev=9354.91 00:34:45.039 lat (usec): min=1106, max=42078, avg=39717.43, stdev=9354.90 00:34:45.039 clat percentiles (usec): 00:34:45.039 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[41157], 20.00th=[41681], 00:34:45.039 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:45.039 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:45.039 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:45.039 | 99.99th=[42206] 00:34:45.039 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:34:45.039 slat (nsec): min=9817, max=53533, avg=29871.52, stdev=10165.59 00:34:45.039 clat (usec): min=138, max=791, avg=448.17, stdev=100.67 00:34:45.039 lat (usec): min=149, max=825, avg=478.04, stdev=104.42 00:34:45.039 clat percentiles (usec): 00:34:45.039 | 1.00th=[ 249], 5.00th=[ 285], 10.00th=[ 326], 20.00th=[ 363], 00:34:45.039 | 30.00th=[ 400], 40.00th=[ 424], 50.00th=[ 449], 60.00th=[ 474], 00:34:45.039 | 70.00th=[ 490], 80.00th=[ 529], 90.00th=[ 570], 95.00th=[ 627], 00:34:45.039 
| 99.00th=[ 725], 99.50th=[ 775], 99.90th=[ 791], 99.95th=[ 791], 00:34:45.039 | 99.99th=[ 791] 00:34:45.039 bw ( KiB/s): min= 4096, max= 4096, per=34.68%, avg=4096.00, stdev= 0.00, samples=1 00:34:45.039 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:45.039 lat (usec) : 250=1.13%, 500=70.24%, 750=24.48%, 1000=0.56% 00:34:45.039 lat (msec) : 2=0.19%, 50=3.39% 00:34:45.039 cpu : usr=0.40%, sys=1.80%, ctx=535, majf=0, minf=1 00:34:45.039 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:45.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.039 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:45.040 job1: (groupid=0, jobs=1): err= 0: pid=2941065: Tue Oct 1 16:59:36 2024 00:34:45.040 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:34:45.040 slat (nsec): min=6605, max=44983, avg=21201.40, stdev=8302.51 00:34:45.040 clat (usec): min=155, max=764, avg=560.09, stdev=82.62 00:34:45.040 lat (usec): min=163, max=790, avg=581.29, stdev=86.11 00:34:45.040 clat percentiles (usec): 00:34:45.040 | 1.00th=[ 262], 5.00th=[ 424], 10.00th=[ 465], 20.00th=[ 502], 00:34:45.040 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 578], 60.00th=[ 594], 00:34:45.040 | 70.00th=[ 603], 80.00th=[ 619], 90.00th=[ 644], 95.00th=[ 660], 00:34:45.040 | 99.00th=[ 693], 99.50th=[ 709], 99.90th=[ 725], 99.95th=[ 766], 00:34:45.040 | 99.99th=[ 766] 00:34:45.040 write: IOPS=1255, BW=5023KiB/s (5144kB/s)(5028KiB/1001msec); 0 zone resets 00:34:45.040 slat (nsec): min=9561, max=51248, avg=25142.45, stdev=10763.89 00:34:45.040 clat (usec): min=106, max=696, avg=285.05, stdev=56.32 00:34:45.040 lat (usec): min=116, max=707, avg=310.19, stdev=57.14 00:34:45.040 clat percentiles (usec): 00:34:45.040 | 1.00th=[ 157], 5.00th=[ 188], 10.00th=[ 212], 20.00th=[ 241], 00:34:45.040 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 297], 00:34:45.040 | 70.00th=[ 310], 80.00th=[ 326], 90.00th=[ 355], 95.00th=[ 375], 00:34:45.040 | 99.00th=[ 420], 99.50th=[ 433], 99.90th=[ 523], 99.95th=[ 693], 00:34:45.040 | 99.99th=[ 693] 00:34:45.040 bw ( KiB/s): min= 4800, max= 4800, per=40.64%, avg=4800.00, stdev= 0.00, samples=1 00:34:45.040 iops : min= 1200, max= 1200, avg=1200.00, stdev= 0.00, samples=1 00:34:45.040 lat (usec) : 250=13.28%, 500=50.59%, 750=36.08%, 1000=0.04% 00:34:45.040 cpu : usr=3.10%, sys=5.50%, ctx=2281, majf=0, minf=2 00:34:45.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:45.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.040 issued rwts: total=1024,1257,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:45.040 job2: (groupid=0, jobs=1): err= 0: pid=2941068: Tue Oct 1 16:59:36 2024 00:34:45.040 read: IOPS=19, BW=77.3KiB/s (79.1kB/s)(80.0KiB/1035msec) 00:34:45.040 slat (nsec): min=8440, max=27322, avg=24838.45, stdev=5196.56 00:34:45.040 clat (usec): min=882, max=42034, avg=39893.80, stdev=9182.52 00:34:45.040 lat (usec): min=893, max=42060, avg=39918.64, stdev=9185.81 00:34:45.040 clat percentiles (usec): 00:34:45.040 | 1.00th=[ 881], 5.00th=[ 881], 10.00th=[41681], 20.00th=[41681], 00:34:45.040 | 30.00th=[41681], 40.00th=[41681], 
50.00th=[42206], 60.00th=[42206], 00:34:45.040 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:45.040 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:45.040 | 99.99th=[42206] 00:34:45.040 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:34:45.040 slat (nsec): min=9923, max=87328, avg=20530.57, stdev=12243.43 00:34:45.040 clat (usec): min=122, max=997, avg=437.15, stdev=164.06 00:34:45.040 lat (usec): min=133, max=1034, avg=457.68, stdev=170.41 00:34:45.040 clat percentiles (usec): 00:34:45.040 | 1.00th=[ 129], 5.00th=[ 143], 10.00th=[ 231], 20.00th=[ 277], 00:34:45.040 | 30.00th=[ 355], 40.00th=[ 396], 50.00th=[ 441], 60.00th=[ 486], 00:34:45.040 | 70.00th=[ 537], 80.00th=[ 578], 90.00th=[ 644], 95.00th=[ 685], 00:34:45.040 | 99.00th=[ 783], 99.50th=[ 906], 99.90th=[ 996], 99.95th=[ 996], 00:34:45.040 | 99.99th=[ 996] 00:34:45.040 bw ( KiB/s): min= 4096, max= 4096, per=34.68%, avg=4096.00, stdev= 0.00, samples=1 00:34:45.040 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:45.040 lat (usec) : 250=12.97%, 500=48.31%, 750=32.71%, 1000=2.44% 00:34:45.040 lat (msec) : 50=3.57% 00:34:45.040 cpu : usr=0.48%, sys=1.06%, ctx=533, majf=0, minf=2 00:34:45.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:45.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.040 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:45.040 job3: (groupid=0, jobs=1): err= 0: pid=2941069: Tue Oct 1 16:59:36 2024 00:34:45.040 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:45.040 slat (nsec): min=7637, max=46506, avg=26660.68, stdev=2970.88 00:34:45.040 clat (usec): min=655, max=1292, avg=1041.19, stdev=91.96 00:34:45.040 lat (usec): min=681, max=1318, avg=1067.85, stdev=91.99 00:34:45.040 clat percentiles (usec): 00:34:45.040 | 1.00th=[ 775], 5.00th=[ 848], 10.00th=[ 930], 20.00th=[ 988], 00:34:45.040 | 30.00th=[ 1012], 40.00th=[ 1037], 50.00th=[ 1045], 60.00th=[ 1074], 00:34:45.040 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1188], 00:34:45.040 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1287], 99.95th=[ 1287], 00:34:45.040 | 99.99th=[ 1287] 00:34:45.040 write: IOPS=774, BW=3097KiB/s (3171kB/s)(3100KiB/1001msec); 0 zone resets 00:34:45.040 slat (nsec): min=9194, max=65242, avg=29781.64, stdev=10158.99 00:34:45.040 clat (usec): min=173, max=782, avg=542.77, stdev=106.59 00:34:45.040 lat (usec): min=184, max=815, avg=572.56, stdev=111.85 00:34:45.040 clat percentiles (usec): 00:34:45.040 | 1.00th=[ 285], 5.00th=[ 343], 10.00th=[ 388], 20.00th=[ 449], 00:34:45.040 | 30.00th=[ 494], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 578], 00:34:45.040 | 70.00th=[ 603], 80.00th=[ 644], 90.00th=[ 676], 95.00th=[ 701], 00:34:45.040 | 99.00th=[ 734], 99.50th=[ 750], 99.90th=[ 783], 99.95th=[ 783], 00:34:45.040 | 99.99th=[ 783] 00:34:45.040 bw ( KiB/s): min= 4096, max= 4096, per=34.68%, avg=4096.00, stdev= 0.00, samples=1 00:34:45.040 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:45.040 lat (usec) : 250=0.31%, 500=18.34%, 750=41.65%, 1000=10.02% 00:34:45.040 lat (msec) : 2=29.68% 00:34:45.040 cpu : usr=2.80%, sys=4.60%, ctx=1287, majf=0, minf=2 00:34:45.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:34:45.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.040 issued rwts: total=512,775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:45.040 00:34:45.040 Run status group 0 (all jobs): 00:34:45.040 READ: bw=6087KiB/s (6233kB/s), 75.8KiB/s-4092KiB/s (77.6kB/s-4190kB/s), io=6300KiB (6451kB), run=1001-1035msec 00:34:45.040 WRITE: bw=11.5MiB/s (12.1MB/s), 1979KiB/s-5023KiB/s (2026kB/s-5144kB/s), io=11.9MiB (12.5MB), run=1001-1035msec 00:34:45.040 00:34:45.040 Disk stats (read/write): 00:34:45.040 nvme0n1: ios=66/512, merge=0/0, ticks=1210/221, in_queue=1431, util=95.59% 00:34:45.040 nvme0n2: ios=924/1024, merge=0/0, ticks=526/283, in_queue=809, util=88.09% 00:34:45.040 nvme0n3: ios=73/512, merge=0/0, ticks=1278/212, in_queue=1490, util=98.22% 00:34:45.040 nvme0n4: ios=523/512, merge=0/0, ticks=964/210, in_queue=1174, util=93.43% 00:34:45.040 16:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:45.040 [global] 00:34:45.040 thread=1 00:34:45.040 invalidate=1 00:34:45.040 rw=randwrite 00:34:45.040 time_based=1 00:34:45.040 runtime=1 00:34:45.040 ioengine=libaio 00:34:45.040 direct=1 00:34:45.040 bs=4096 00:34:45.040 iodepth=1 00:34:45.040 norandommap=0 00:34:45.040 numjobs=1 00:34:45.040 00:34:45.040 verify_dump=1 00:34:45.040 verify_backlog=512 00:34:45.041 verify_state_save=0 00:34:45.041 do_verify=1 00:34:45.041 verify=crc32c-intel 00:34:45.041 [job0] 00:34:45.041 filename=/dev/nvme0n1 00:34:45.041 [job1] 00:34:45.041 filename=/dev/nvme0n2 00:34:45.041 [job2] 00:34:45.041 filename=/dev/nvme0n3 00:34:45.041 [job3] 00:34:45.041 filename=/dev/nvme0n4 00:34:45.041 Could not set queue depth (nvme0n1) 00:34:45.041 Could not set queue depth (nvme0n2) 00:34:45.041 Could not set queue depth (nvme0n3) 00:34:45.041 Could not set queue depth (nvme0n4) 00:34:45.300 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:45.300 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:45.300 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:45.300 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:45.300 fio-3.35 00:34:45.300 Starting 4 threads 00:34:46.671 00:34:46.671 job0: (groupid=0, jobs=1): err= 0: pid=2941544: Tue Oct 1 16:59:37 2024 00:34:46.671 read: IOPS=16, BW=67.3KiB/s (68.9kB/s)(68.0KiB/1010msec) 00:34:46.671 slat (nsec): min=26146, max=27108, avg=26506.82, stdev=276.17 00:34:46.671 clat (usec): min=40911, max=42044, avg=41490.35, stdev=477.70 00:34:46.671 lat (usec): min=40938, max=42071, avg=41516.86, stdev=477.60 00:34:46.671 clat percentiles (usec): 00:34:46.671 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:46.671 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:34:46.671 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:46.671 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:46.671 | 99.99th=[42206] 00:34:46.671 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:34:46.671 slat (nsec): 
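Each fio-wrapper pass drives the four connected namespaces (/dev/nvme0n1..n4) with the job file echoed above. A commented reconstruction of that file for this randwrite pass, written out by hand for illustration (the wrapper generates it internally; the /tmp/nvmf.fio path is assumed):

  cat > /tmp/nvmf.fio <<'EOF'
  [global]
  thread=1
  invalidate=1          # invalidate cached pages of the files under test
  rw=randwrite          # workload, from the wrapper's -t flag
  time_based=1
  runtime=1             # seconds, from -r
  ioengine=libaio
  direct=1              # O_DIRECT, bypass the page cache
  bs=4096               # block size, from -i
  iodepth=1             # from -d
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel   # data-integrity verification enabled by -v

  [job0]
  filename=/dev/nvme0n1
  [job1]
  filename=/dev/nvme0n2
  [job2]
  filename=/dev/nvme0n3
  [job3]
  filename=/dev/nvme0n4
  EOF
  fio /tmp/nvmf.fio

The "Could not set queue depth" lines appear to be fio being unable to adjust the block-layer queue depth for these NVMe-oF devices; the runs complete regardless.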
min=9795, max=70601, avg=31435.29, stdev=10130.50 00:34:46.671 clat (usec): min=173, max=889, avg=553.13, stdev=129.06 00:34:46.671 lat (usec): min=206, max=926, avg=584.56, stdev=130.64 00:34:46.671 clat percentiles (usec): 00:34:46.671 | 1.00th=[ 258], 5.00th=[ 343], 10.00th=[ 396], 20.00th=[ 445], 00:34:46.671 | 30.00th=[ 486], 40.00th=[ 523], 50.00th=[ 562], 60.00th=[ 586], 00:34:46.671 | 70.00th=[ 619], 80.00th=[ 660], 90.00th=[ 709], 95.00th=[ 783], 00:34:46.671 | 99.00th=[ 865], 99.50th=[ 873], 99.90th=[ 889], 99.95th=[ 889], 00:34:46.671 | 99.99th=[ 889] 00:34:46.671 bw ( KiB/s): min= 4096, max= 4096, per=43.11%, avg=4096.00, stdev= 0.00, samples=1 00:34:46.671 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:46.671 lat (usec) : 250=0.57%, 500=32.51%, 750=57.47%, 1000=6.24% 00:34:46.671 lat (msec) : 50=3.21% 00:34:46.671 cpu : usr=0.89%, sys=2.18%, ctx=530, majf=0, minf=1 00:34:46.671 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:46.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.671 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.671 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:46.671 job1: (groupid=0, jobs=1): err= 0: pid=2941545: Tue Oct 1 16:59:37 2024 00:34:46.671 read: IOPS=16, BW=67.5KiB/s (69.1kB/s)(68.0KiB/1008msec) 00:34:46.671 slat (nsec): min=11099, max=29239, avg=25189.94, stdev=3726.79 00:34:46.671 clat (usec): min=40953, max=42034, avg=41640.23, stdev=445.76 00:34:46.671 lat (usec): min=40979, max=42063, avg=41665.42, stdev=446.49 00:34:46.671 clat percentiles (usec): 00:34:46.671 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:46.671 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:34:46.671 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:46.671 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:46.671 | 99.99th=[42206] 00:34:46.671 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:34:46.671 slat (nsec): min=8945, max=56577, avg=26457.24, stdev=10574.29 00:34:46.671 clat (usec): min=205, max=1312, avg=550.36, stdev=111.44 00:34:46.671 lat (usec): min=218, max=1344, avg=576.81, stdev=116.47 00:34:46.671 clat percentiles (usec): 00:34:46.671 | 1.00th=[ 251], 5.00th=[ 379], 10.00th=[ 416], 20.00th=[ 457], 00:34:46.671 | 30.00th=[ 494], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 586], 00:34:46.671 | 70.00th=[ 611], 80.00th=[ 644], 90.00th=[ 685], 95.00th=[ 717], 00:34:46.671 | 99.00th=[ 758], 99.50th=[ 783], 99.90th=[ 1319], 99.95th=[ 1319], 00:34:46.671 | 99.99th=[ 1319] 00:34:46.671 bw ( KiB/s): min= 4096, max= 4096, per=43.11%, avg=4096.00, stdev= 0.00, samples=1 00:34:46.671 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:46.671 lat (usec) : 250=0.95%, 500=30.62%, 750=63.89%, 1000=1.13% 00:34:46.671 lat (msec) : 2=0.19%, 50=3.21% 00:34:46.671 cpu : usr=1.09%, sys=1.49%, ctx=529, majf=0, minf=1 00:34:46.671 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:46.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.671 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.671 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:46.671 
job2: (groupid=0, jobs=1): err= 0: pid=2941546: Tue Oct 1 16:59:37 2024 00:34:46.671 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:46.671 slat (nsec): min=6512, max=59819, avg=26926.54, stdev=3541.39 00:34:46.671 clat (usec): min=510, max=1178, avg=930.83, stdev=75.26 00:34:46.671 lat (usec): min=517, max=1204, avg=957.76, stdev=76.19 00:34:46.671 clat percentiles (usec): 00:34:46.671 | 1.00th=[ 545], 5.00th=[ 816], 10.00th=[ 848], 20.00th=[ 906], 00:34:46.671 | 30.00th=[ 914], 40.00th=[ 930], 50.00th=[ 938], 60.00th=[ 947], 00:34:46.671 | 70.00th=[ 963], 80.00th=[ 971], 90.00th=[ 1004], 95.00th=[ 1029], 00:34:46.671 | 99.00th=[ 1106], 99.50th=[ 1106], 99.90th=[ 1172], 99.95th=[ 1172], 00:34:46.671 | 99.99th=[ 1172] 00:34:46.671 write: IOPS=893, BW=3572KiB/s (3658kB/s)(3576KiB/1001msec); 0 zone resets 00:34:46.671 slat (nsec): min=9004, max=66359, avg=29747.47, stdev=9118.18 00:34:46.671 clat (usec): min=229, max=795, avg=527.60, stdev=110.35 00:34:46.671 lat (usec): min=247, max=828, avg=557.35, stdev=113.77 00:34:46.671 clat percentiles (usec): 00:34:46.671 | 1.00th=[ 269], 5.00th=[ 343], 10.00th=[ 375], 20.00th=[ 437], 00:34:46.671 | 30.00th=[ 469], 40.00th=[ 498], 50.00th=[ 537], 60.00th=[ 562], 00:34:46.671 | 70.00th=[ 586], 80.00th=[ 627], 90.00th=[ 668], 95.00th=[ 709], 00:34:46.671 | 99.00th=[ 758], 99.50th=[ 775], 99.90th=[ 799], 99.95th=[ 799], 00:34:46.671 | 99.99th=[ 799] 00:34:46.671 bw ( KiB/s): min= 4096, max= 4096, per=43.11%, avg=4096.00, stdev= 0.00, samples=1 00:34:46.671 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:46.671 lat (usec) : 250=0.28%, 500=25.25%, 750=37.70%, 1000=33.14% 00:34:46.671 lat (msec) : 2=3.63% 00:34:46.671 cpu : usr=4.50%, sys=3.90%, ctx=1407, majf=0, minf=1 00:34:46.671 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:46.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.671 issued rwts: total=512,894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.671 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:46.671 job3: (groupid=0, jobs=1): err= 0: pid=2941547: Tue Oct 1 16:59:37 2024 00:34:46.671 read: IOPS=17, BW=70.4KiB/s (72.1kB/s)(72.0KiB/1023msec) 00:34:46.671 slat (nsec): min=27375, max=28517, avg=27799.78, stdev=293.45 00:34:46.671 clat (usec): min=997, max=42052, avg=39666.17, stdev=9650.76 00:34:46.671 lat (usec): min=1025, max=42080, avg=39693.97, stdev=9650.78 00:34:46.671 clat percentiles (usec): 00:34:46.671 | 1.00th=[ 996], 5.00th=[ 996], 10.00th=[41681], 20.00th=[41681], 00:34:46.671 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:46.671 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:46.671 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:46.671 | 99.99th=[42206] 00:34:46.671 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:34:46.671 slat (nsec): min=9217, max=63298, avg=31042.64, stdev=9698.81 00:34:46.671 clat (usec): min=315, max=856, avg=561.96, stdev=103.89 00:34:46.671 lat (usec): min=327, max=891, avg=593.00, stdev=107.06 00:34:46.671 clat percentiles (usec): 00:34:46.671 | 1.00th=[ 334], 5.00th=[ 383], 10.00th=[ 416], 20.00th=[ 474], 00:34:46.671 | 30.00th=[ 506], 40.00th=[ 545], 50.00th=[ 562], 60.00th=[ 594], 00:34:46.671 | 70.00th=[ 619], 80.00th=[ 652], 90.00th=[ 701], 95.00th=[ 725], 00:34:46.671 | 99.00th=[ 775], 
99.50th=[ 807], 99.90th=[ 857], 99.95th=[ 857], 00:34:46.671 | 99.99th=[ 857] 00:34:46.671 bw ( KiB/s): min= 4096, max= 4096, per=43.11%, avg=4096.00, stdev= 0.00, samples=1 00:34:46.671 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:46.671 lat (usec) : 500=26.23%, 750=67.36%, 1000=3.21% 00:34:46.671 lat (msec) : 50=3.21% 00:34:46.671 cpu : usr=1.17%, sys=1.86%, ctx=532, majf=0, minf=2 00:34:46.671 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:46.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.671 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.671 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:46.671 00:34:46.671 Run status group 0 (all jobs): 00:34:46.671 READ: bw=2205KiB/s (2258kB/s), 67.3KiB/s-2046KiB/s (68.9kB/s-2095kB/s), io=2256KiB (2310kB), run=1001-1023msec 00:34:46.671 WRITE: bw=9501KiB/s (9730kB/s), 2002KiB/s-3572KiB/s (2050kB/s-3658kB/s), io=9720KiB (9953kB), run=1001-1023msec 00:34:46.671 00:34:46.671 Disk stats (read/write): 00:34:46.671 nvme0n1: ios=63/512, merge=0/0, ticks=598/252, in_queue=850, util=89.18% 00:34:46.671 nvme0n2: ios=40/512, merge=0/0, ticks=620/216, in_queue=836, util=90.65% 00:34:46.671 nvme0n3: ios=569/636, merge=0/0, ticks=556/254, in_queue=810, util=93.10% 00:34:46.671 nvme0n4: ios=56/512, merge=0/0, ticks=1716/224, in_queue=1940, util=96.41% 00:34:46.671 16:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:46.671 [global] 00:34:46.671 thread=1 00:34:46.671 invalidate=1 00:34:46.671 rw=write 00:34:46.671 time_based=1 00:34:46.671 runtime=1 00:34:46.671 ioengine=libaio 00:34:46.671 direct=1 00:34:46.671 bs=4096 00:34:46.671 iodepth=128 00:34:46.671 norandommap=0 00:34:46.671 numjobs=1 00:34:46.671 00:34:46.671 verify_dump=1 00:34:46.671 verify_backlog=512 00:34:46.671 verify_state_save=0 00:34:46.671 do_verify=1 00:34:46.671 verify=crc32c-intel 00:34:46.671 [job0] 00:34:46.671 filename=/dev/nvme0n1 00:34:46.671 [job1] 00:34:46.671 filename=/dev/nvme0n2 00:34:46.671 [job2] 00:34:46.671 filename=/dev/nvme0n3 00:34:46.671 [job3] 00:34:46.671 filename=/dev/nvme0n4 00:34:46.671 Could not set queue depth (nvme0n1) 00:34:46.671 Could not set queue depth (nvme0n2) 00:34:46.671 Could not set queue depth (nvme0n3) 00:34:46.671 Could not set queue depth (nvme0n4) 00:34:46.977 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:46.977 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:46.977 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:46.977 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:46.977 fio-3.35 00:34:46.977 Starting 4 threads 00:34:47.987 00:34:47.988 job0: (groupid=0, jobs=1): err= 0: pid=2941968: Tue Oct 1 16:59:39 2024 00:34:47.988 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:34:47.988 slat (nsec): min=1318, max=13094k, avg=91967.42, stdev=688500.61 00:34:47.988 clat (usec): min=4513, max=27146, avg=12119.98, stdev=3858.95 00:34:47.988 lat (usec): min=4515, max=27151, avg=12211.95, stdev=3897.56 00:34:47.988 clat 
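target/fio.sh runs the wrapper three times, changing only the workload type and queue depth; the pass starting here is the queue-depth-128 variant, which moves the target from the latency-bound iodepth=1 behavior above toward a throughput-bound one. The three invocations, shortened:

  scripts/fio-wrapper -p nvmf -i 4096 -d 1   -t write     -r 1 -v   # target/fio.sh@50
  scripts/fio-wrapper -p nvmf -i 4096 -d 1   -t randwrite -r 1 -v   # target/fio.sh@51
  scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write     -r 1 -v   # target/fio.sh@52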
percentiles (usec): 00:34:47.988 | 1.00th=[ 5669], 5.00th=[ 6521], 10.00th=[ 7504], 20.00th=[ 8848], 00:34:47.988 | 30.00th=[ 9896], 40.00th=[10814], 50.00th=[11338], 60.00th=[12518], 00:34:47.988 | 70.00th=[13304], 80.00th=[15270], 90.00th=[17433], 95.00th=[19006], 00:34:47.988 | 99.00th=[23987], 99.50th=[23987], 99.90th=[25560], 99.95th=[26870], 00:34:47.988 | 99.99th=[27132] 00:34:47.988 write: IOPS=5485, BW=21.4MiB/s (22.5MB/s)(21.5MiB/1005msec); 0 zone resets 00:34:47.988 slat (usec): min=2, max=29651, avg=88.57, stdev=699.52 00:34:47.988 clat (usec): min=2654, max=43471, avg=11033.68, stdev=5363.51 00:34:47.988 lat (usec): min=3106, max=48658, avg=11122.25, stdev=5413.48 00:34:47.988 clat percentiles (usec): 00:34:47.988 | 1.00th=[ 4047], 5.00th=[ 5735], 10.00th=[ 6325], 20.00th=[ 7439], 00:34:47.988 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10552], 00:34:47.988 | 70.00th=[11600], 80.00th=[13566], 90.00th=[15926], 95.00th=[20317], 00:34:47.988 | 99.00th=[34866], 99.50th=[36439], 99.90th=[43254], 99.95th=[43254], 00:34:47.988 | 99.99th=[43254] 00:34:47.988 bw ( KiB/s): min=20456, max=22624, per=25.96%, avg=21540.00, stdev=1533.01, samples=2 00:34:47.988 iops : min= 5114, max= 5656, avg=5385.00, stdev=383.25, samples=2 00:34:47.988 lat (msec) : 4=0.49%, 10=42.24%, 20=53.16%, 50=4.12% 00:34:47.988 cpu : usr=3.69%, sys=5.38%, ctx=354, majf=0, minf=1 00:34:47.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:47.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:47.988 issued rwts: total=5120,5513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:47.988 job1: (groupid=0, jobs=1): err= 0: pid=2941984: Tue Oct 1 16:59:39 2024 00:34:47.988 read: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec) 00:34:47.988 slat (nsec): min=1210, max=13074k, avg=81854.03, stdev=640299.00 00:34:47.988 clat (usec): min=2676, max=43193, avg=11073.78, stdev=6909.42 00:34:47.988 lat (usec): min=2677, max=49652, avg=11155.63, stdev=6954.63 00:34:47.988 clat percentiles (usec): 00:34:47.988 | 1.00th=[ 3752], 5.00th=[ 4359], 10.00th=[ 5473], 20.00th=[ 6063], 00:34:47.988 | 30.00th=[ 7046], 40.00th=[ 7832], 50.00th=[ 8356], 60.00th=[ 9634], 00:34:47.988 | 70.00th=[11731], 80.00th=[15008], 90.00th=[21365], 95.00th=[25297], 00:34:47.988 | 99.00th=[35390], 99.50th=[40633], 99.90th=[43254], 99.95th=[43254], 00:34:47.988 | 99.99th=[43254] 00:34:47.988 write: IOPS=5858, BW=22.9MiB/s (24.0MB/s)(23.1MiB/1009msec); 0 zone resets 00:34:47.988 slat (usec): min=2, max=13629, avg=83.36, stdev=570.68 00:34:47.988 clat (usec): min=558, max=74270, avg=11098.92, stdev=10621.17 00:34:47.988 lat (usec): min=566, max=74278, avg=11182.28, stdev=10696.83 00:34:47.988 clat percentiles (usec): 00:34:47.988 | 1.00th=[ 2507], 5.00th=[ 4047], 10.00th=[ 4948], 20.00th=[ 5604], 00:34:47.988 | 30.00th=[ 6521], 40.00th=[ 7111], 50.00th=[ 7701], 60.00th=[ 8094], 00:34:47.988 | 70.00th=[ 9634], 80.00th=[14877], 90.00th=[20841], 95.00th=[25297], 00:34:47.988 | 99.00th=[63177], 99.50th=[66323], 99.90th=[73925], 99.95th=[73925], 00:34:47.988 | 99.99th=[73925] 00:34:47.988 bw ( KiB/s): min=14072, max=32200, per=27.88%, avg=23136.00, stdev=12818.43, samples=2 00:34:47.988 iops : min= 3518, max= 8050, avg=5784.00, stdev=3204.61, samples=2 00:34:47.988 lat (usec) : 750=0.03% 00:34:47.988 lat (msec) : 2=0.33%, 4=3.34%, 
10=63.35%, 20=21.84%, 50=9.73% 00:34:47.988 lat (msec) : 100=1.38% 00:34:47.988 cpu : usr=4.17%, sys=5.85%, ctx=499, majf=0, minf=1 00:34:47.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:47.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:47.988 issued rwts: total=5632,5911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:47.988 job2: (groupid=0, jobs=1): err= 0: pid=2942002: Tue Oct 1 16:59:39 2024 00:34:47.988 read: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec) 00:34:47.988 slat (nsec): min=1292, max=17186k, avg=81451.80, stdev=673623.55 00:34:47.988 clat (usec): min=2679, max=48145, avg=10946.71, stdev=5837.13 00:34:47.988 lat (usec): min=2697, max=48151, avg=11028.16, stdev=5879.62 00:34:47.988 clat percentiles (usec): 00:34:47.988 | 1.00th=[ 4293], 5.00th=[ 5473], 10.00th=[ 6325], 20.00th=[ 7242], 00:34:47.988 | 30.00th=[ 7767], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[10028], 00:34:47.988 | 70.00th=[11207], 80.00th=[13566], 90.00th=[18744], 95.00th=[22676], 00:34:47.988 | 99.00th=[30016], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:34:47.988 | 99.99th=[47973] 00:34:47.988 write: IOPS=6206, BW=24.2MiB/s (25.4MB/s)(24.4MiB/1005msec); 0 zone resets 00:34:47.988 slat (usec): min=2, max=10803, avg=71.35, stdev=520.39 00:34:47.988 clat (usec): min=623, max=60112, avg=9658.80, stdev=5945.98 00:34:47.988 lat (usec): min=657, max=60115, avg=9730.15, stdev=5976.39 00:34:47.988 clat percentiles (usec): 00:34:47.988 | 1.00th=[ 1336], 5.00th=[ 4047], 10.00th=[ 5080], 20.00th=[ 5997], 00:34:47.988 | 30.00th=[ 6652], 40.00th=[ 7701], 50.00th=[ 8291], 60.00th=[ 9110], 00:34:47.988 | 70.00th=[10159], 80.00th=[12387], 90.00th=[15139], 95.00th=[19006], 00:34:47.988 | 99.00th=[35390], 99.50th=[43254], 99.90th=[60031], 99.95th=[60031], 00:34:47.988 | 99.99th=[60031] 00:34:47.988 bw ( KiB/s): min=20480, max=28672, per=29.61%, avg=24576.00, stdev=5792.62, samples=2 00:34:47.988 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2 00:34:47.988 lat (usec) : 750=0.02%, 1000=0.13% 00:34:47.988 lat (msec) : 2=0.53%, 4=1.98%, 10=61.24%, 20=30.07%, 50=5.78% 00:34:47.988 lat (msec) : 100=0.24% 00:34:47.988 cpu : usr=3.98%, sys=7.17%, ctx=425, majf=0, minf=2 00:34:47.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:47.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:47.988 issued rwts: total=6144,6238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:47.988 job3: (groupid=0, jobs=1): err= 0: pid=2942007: Tue Oct 1 16:59:39 2024 00:34:47.988 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:34:47.988 slat (nsec): min=1312, max=22196k, avg=156879.88, stdev=1099805.04 00:34:47.988 clat (usec): min=6277, max=54024, avg=19225.43, stdev=8405.36 00:34:47.988 lat (usec): min=6284, max=54032, avg=19382.31, stdev=8475.78 00:34:47.988 clat percentiles (usec): 00:34:47.988 | 1.00th=[ 7242], 5.00th=[ 8586], 10.00th=[10159], 20.00th=[12387], 00:34:47.988 | 30.00th=[14615], 40.00th=[16450], 50.00th=[17695], 60.00th=[18220], 00:34:47.988 | 70.00th=[22414], 80.00th=[24773], 90.00th=[30802], 95.00th=[36439], 00:34:47.988 | 99.00th=[52691], 99.50th=[53740], 
99.90th=[54264], 99.95th=[54264], 00:34:47.988 | 99.99th=[54264] 00:34:47.988 write: IOPS=3241, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1009msec); 0 zone resets 00:34:47.988 slat (usec): min=2, max=16692, avg=152.51, stdev=1082.51 00:34:47.988 clat (usec): min=4622, max=74078, avg=20910.14, stdev=11304.13 00:34:47.988 lat (usec): min=4633, max=74089, avg=21062.65, stdev=11397.67 00:34:47.988 clat percentiles (usec): 00:34:47.988 | 1.00th=[ 5669], 5.00th=[ 7111], 10.00th=[ 8848], 20.00th=[11207], 00:34:47.988 | 30.00th=[13698], 40.00th=[16712], 50.00th=[20579], 60.00th=[21365], 00:34:47.988 | 70.00th=[25035], 80.00th=[28705], 90.00th=[32900], 95.00th=[40109], 00:34:47.988 | 99.00th=[63177], 99.50th=[66847], 99.90th=[73925], 99.95th=[73925], 00:34:47.988 | 99.99th=[73925] 00:34:47.988 bw ( KiB/s): min= 9400, max=15752, per=15.15%, avg=12576.00, stdev=4491.54, samples=2 00:34:47.988 iops : min= 2350, max= 3938, avg=3144.00, stdev=1122.89, samples=2 00:34:47.988 lat (msec) : 10=12.57%, 20=44.13%, 50=41.29%, 100=2.02% 00:34:47.988 cpu : usr=3.57%, sys=2.68%, ctx=240, majf=0, minf=1 00:34:47.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:34:47.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:47.988 issued rwts: total=3072,3271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:47.988 00:34:47.988 Run status group 0 (all jobs): 00:34:47.988 READ: bw=77.3MiB/s (81.1MB/s), 11.9MiB/s-23.9MiB/s (12.5MB/s-25.0MB/s), io=78.0MiB (81.8MB), run=1005-1009msec 00:34:47.988 WRITE: bw=81.0MiB/s (85.0MB/s), 12.7MiB/s-24.2MiB/s (13.3MB/s-25.4MB/s), io=81.8MiB (85.7MB), run=1005-1009msec 00:34:47.988 00:34:47.988 Disk stats (read/write): 00:34:47.988 nvme0n1: ios=4137/4334, merge=0/0, ticks=47663/45124, in_queue=92787, util=99.90% 00:34:47.988 nvme0n2: ios=5401/5632, merge=0/0, ticks=26328/21735, in_queue=48063, util=96.85% 00:34:47.988 nvme0n3: ios=4801/5120, merge=0/0, ticks=36504/35902, in_queue=72406, util=96.55% 00:34:47.988 nvme0n4: ios=2617/3072, merge=0/0, ticks=23445/28515, in_queue=51960, util=97.89% 00:34:47.988 16:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:47.988 [global] 00:34:47.988 thread=1 00:34:47.988 invalidate=1 00:34:47.988 rw=randwrite 00:34:47.988 time_based=1 00:34:47.988 runtime=1 00:34:47.988 ioengine=libaio 00:34:47.988 direct=1 00:34:47.988 bs=4096 00:34:47.988 iodepth=128 00:34:47.988 norandommap=0 00:34:47.988 numjobs=1 00:34:47.988 00:34:47.989 verify_dump=1 00:34:47.989 verify_backlog=512 00:34:47.989 verify_state_save=0 00:34:47.989 do_verify=1 00:34:47.989 verify=crc32c-intel 00:34:47.989 [job0] 00:34:47.989 filename=/dev/nvme0n1 00:34:47.989 [job1] 00:34:47.989 filename=/dev/nvme0n2 00:34:47.989 [job2] 00:34:47.989 filename=/dev/nvme0n3 00:34:47.989 [job3] 00:34:47.989 filename=/dev/nvme0n4 00:34:48.246 Could not set queue depth (nvme0n1) 00:34:48.246 Could not set queue depth (nvme0n2) 00:34:48.246 Could not set queue depth (nvme0n3) 00:34:48.246 Could not set queue depth (nvme0n4) 00:34:48.505 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:48.505 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:34:48.505 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:48.505 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:48.505 fio-3.35 00:34:48.505 Starting 4 threads 00:34:49.886 00:34:49.886 job0: (groupid=0, jobs=1): err= 0: pid=2942367: Tue Oct 1 16:59:41 2024 00:34:49.886 read: IOPS=5578, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1009msec) 00:34:49.886 slat (nsec): min=1226, max=18490k, avg=91342.42, stdev=629187.63 00:34:49.886 clat (usec): min=4910, max=41771, avg=11711.99, stdev=5510.05 00:34:49.886 lat (usec): min=4918, max=49128, avg=11803.33, stdev=5565.68 00:34:49.886 clat percentiles (usec): 00:34:49.886 | 1.00th=[ 6980], 5.00th=[ 7963], 10.00th=[ 8455], 20.00th=[ 8717], 00:34:49.886 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10159], 00:34:49.886 | 70.00th=[10683], 80.00th=[12387], 90.00th=[18482], 95.00th=[24773], 00:34:49.886 | 99.00th=[31851], 99.50th=[31851], 99.90th=[35914], 99.95th=[40109], 00:34:49.886 | 99.99th=[41681] 00:34:49.886 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:34:49.886 slat (usec): min=2, max=36445, avg=81.39, stdev=699.46 00:34:49.886 clat (usec): min=701, max=62328, avg=10995.14, stdev=6977.62 00:34:49.886 lat (usec): min=709, max=62336, avg=11076.53, stdev=7022.96 00:34:49.886 clat percentiles (usec): 00:34:49.886 | 1.00th=[ 4948], 5.00th=[ 7504], 10.00th=[ 8094], 20.00th=[ 8455], 00:34:49.886 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:34:49.886 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[13960], 95.00th=[25822], 00:34:49.886 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[52167], 00:34:49.886 | 99.99th=[62129] 00:34:49.886 bw ( KiB/s): min=20480, max=24576, per=24.40%, avg=22528.00, stdev=2896.31, samples=2 00:34:49.886 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:34:49.886 lat (usec) : 750=0.03% 00:34:49.886 lat (msec) : 2=0.08%, 10=65.60%, 20=25.89%, 50=8.37%, 100=0.04% 00:34:49.886 cpu : usr=3.77%, sys=3.97%, ctx=480, majf=0, minf=1 00:34:49.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:49.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:49.887 issued rwts: total=5629,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:49.887 job1: (groupid=0, jobs=1): err= 0: pid=2942370: Tue Oct 1 16:59:41 2024 00:34:49.887 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:34:49.887 slat (nsec): min=1038, max=12462k, avg=74422.15, stdev=539995.79 00:34:49.887 clat (usec): min=2438, max=46146, avg=10382.46, stdev=5923.63 00:34:49.887 lat (usec): min=2445, max=46172, avg=10456.89, stdev=5969.67 00:34:49.887 clat percentiles (usec): 00:34:49.887 | 1.00th=[ 4228], 5.00th=[ 5932], 10.00th=[ 6390], 20.00th=[ 6980], 00:34:49.887 | 30.00th=[ 7504], 40.00th=[ 7832], 50.00th=[ 8094], 60.00th=[ 8848], 00:34:49.887 | 70.00th=[10814], 80.00th=[11731], 90.00th=[17695], 95.00th=[24773], 00:34:49.887 | 99.00th=[35914], 99.50th=[37487], 99.90th=[41157], 99.95th=[41157], 00:34:49.887 | 99.99th=[46400] 00:34:49.887 write: IOPS=6557, BW=25.6MiB/s (26.9MB/s)(25.7MiB/1003msec); 0 zone resets 00:34:49.887 slat (nsec): min=1970, max=14951k, avg=72717.67, stdev=582678.24 00:34:49.887 clat (usec): 
min=806, max=47476, avg=9561.13, stdev=6538.10 00:34:49.887 lat (usec): min=914, max=47508, avg=9633.85, stdev=6598.19 00:34:49.887 clat percentiles (usec): 00:34:49.887 | 1.00th=[ 2507], 5.00th=[ 5014], 10.00th=[ 5604], 20.00th=[ 6194], 00:34:49.887 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 8225], 00:34:49.887 | 70.00th=[ 9241], 80.00th=[10028], 90.00th=[16057], 95.00th=[27132], 00:34:49.887 | 99.00th=[36439], 99.50th=[38536], 99.90th=[38536], 99.95th=[43779], 00:34:49.887 | 99.99th=[47449] 00:34:49.887 bw ( KiB/s): min=24000, max=27592, per=27.94%, avg=25796.00, stdev=2539.93, samples=2 00:34:49.887 iops : min= 6000, max= 6898, avg=6449.00, stdev=634.98, samples=2 00:34:49.887 lat (usec) : 1000=0.03% 00:34:49.887 lat (msec) : 2=0.29%, 4=1.51%, 10=71.94%, 20=18.46%, 50=7.77% 00:34:49.887 cpu : usr=5.59%, sys=3.69%, ctx=577, majf=0, minf=1 00:34:49.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:49.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:49.887 issued rwts: total=6144,6577,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:49.887 job2: (groupid=0, jobs=1): err= 0: pid=2942386: Tue Oct 1 16:59:41 2024 00:34:49.887 read: IOPS=5168, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1006msec) 00:34:49.887 slat (nsec): min=1260, max=10553k, avg=90990.18, stdev=626717.12 00:34:49.887 clat (usec): min=3084, max=38768, avg=10840.15, stdev=3978.24 00:34:49.887 lat (usec): min=4797, max=38772, avg=10931.14, stdev=4023.83 00:34:49.887 clat percentiles (usec): 00:34:49.887 | 1.00th=[ 5407], 5.00th=[ 6456], 10.00th=[ 7308], 20.00th=[ 8455], 00:34:49.887 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10814], 00:34:49.887 | 70.00th=[11338], 80.00th=[12125], 90.00th=[15139], 95.00th=[17957], 00:34:49.887 | 99.00th=[28443], 99.50th=[32637], 99.90th=[36963], 99.95th=[38536], 00:34:49.887 | 99.99th=[38536] 00:34:49.887 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:34:49.887 slat (usec): min=2, max=9299, avg=87.89, stdev=512.78 00:34:49.887 clat (usec): min=785, max=42965, avg=12626.32, stdev=8226.50 00:34:49.887 lat (usec): min=794, max=42974, avg=12714.22, stdev=8283.60 00:34:49.887 clat percentiles (usec): 00:34:49.887 | 1.00th=[ 3359], 5.00th=[ 5407], 10.00th=[ 6194], 20.00th=[ 7111], 00:34:49.887 | 30.00th=[ 7701], 40.00th=[ 8455], 50.00th=[ 9503], 60.00th=[10683], 00:34:49.887 | 70.00th=[12649], 80.00th=[18220], 90.00th=[23725], 95.00th=[33424], 00:34:49.887 | 99.00th=[40109], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:34:49.887 | 99.99th=[42730] 00:34:49.887 bw ( KiB/s): min=20656, max=24016, per=24.19%, avg=22336.00, stdev=2375.88, samples=2 00:34:49.887 iops : min= 5164, max= 6004, avg=5584.00, stdev=593.97, samples=2 00:34:49.887 lat (usec) : 1000=0.08% 00:34:49.887 lat (msec) : 2=0.06%, 4=0.55%, 10=51.63%, 20=38.09%, 50=9.58% 00:34:49.887 cpu : usr=2.69%, sys=6.77%, ctx=477, majf=0, minf=2 00:34:49.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:49.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:49.887 issued rwts: total=5200,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:49.887 job3: 
(groupid=0, jobs=1): err= 0: pid=2942392: Tue Oct 1 16:59:41 2024 00:34:49.887 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:34:49.887 slat (nsec): min=1045, max=14078k, avg=98155.20, stdev=741040.69 00:34:49.887 clat (usec): min=4141, max=56463, avg=12091.28, stdev=5257.45 00:34:49.887 lat (usec): min=4149, max=56512, avg=12189.43, stdev=5324.70 00:34:49.887 clat percentiles (usec): 00:34:49.887 | 1.00th=[ 5342], 5.00th=[ 7046], 10.00th=[ 8356], 20.00th=[ 8848], 00:34:49.887 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10814], 60.00th=[11863], 00:34:49.887 | 70.00th=[12780], 80.00th=[14222], 90.00th=[16057], 95.00th=[18482], 00:34:49.887 | 99.00th=[36963], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:34:49.887 | 99.99th=[56361] 00:34:49.887 write: IOPS=5402, BW=21.1MiB/s (22.1MB/s)(21.3MiB/1008msec); 0 zone resets 00:34:49.887 slat (nsec): min=1908, max=14993k, avg=86163.66, stdev=616559.37 00:34:49.887 clat (usec): min=3100, max=55846, avg=12021.82, stdev=6897.22 00:34:49.887 lat (usec): min=3111, max=55871, avg=12107.98, stdev=6951.20 00:34:49.887 clat percentiles (usec): 00:34:49.887 | 1.00th=[ 4621], 5.00th=[ 5866], 10.00th=[ 7832], 20.00th=[ 9110], 00:34:49.887 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:34:49.887 | 70.00th=[10945], 80.00th=[13960], 90.00th=[16712], 95.00th=[31065], 00:34:49.887 | 99.00th=[43779], 99.50th=[43779], 99.90th=[46400], 99.95th=[49021], 00:34:49.887 | 99.99th=[55837] 00:34:49.887 bw ( KiB/s): min=20480, max=22064, per=23.04%, avg=21272.00, stdev=1120.06, samples=2 00:34:49.887 iops : min= 5120, max= 5516, avg=5318.00, stdev=280.01, samples=2 00:34:49.887 lat (msec) : 4=0.20%, 10=45.49%, 20=48.75%, 50=5.53%, 100=0.03% 00:34:49.887 cpu : usr=2.98%, sys=6.75%, ctx=428, majf=0, minf=1 00:34:49.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:49.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:49.887 issued rwts: total=5120,5446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:49.887 00:34:49.887 Run status group 0 (all jobs): 00:34:49.887 READ: bw=85.5MiB/s (89.7MB/s), 19.8MiB/s-23.9MiB/s (20.8MB/s-25.1MB/s), io=86.3MiB (90.5MB), run=1003-1009msec 00:34:49.887 WRITE: bw=90.2MiB/s (94.5MB/s), 21.1MiB/s-25.6MiB/s (22.1MB/s-26.9MB/s), io=91.0MiB (95.4MB), run=1003-1009msec 00:34:49.887 00:34:49.887 Disk stats (read/write): 00:34:49.887 nvme0n1: ios=4642/5048, merge=0/0, ticks=19610/18047, in_queue=37657, util=99.80% 00:34:49.887 nvme0n2: ios=4963/5120, merge=0/0, ticks=21956/20038, in_queue=41994, util=87.91% 00:34:49.887 nvme0n3: ios=4141/4557, merge=0/0, ticks=41615/59469, in_queue=101084, util=91.11% 00:34:49.887 nvme0n4: ios=4260/4608, merge=0/0, ticks=40736/37265, in_queue=78001, util=95.56% 00:34:49.887 16:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:49.887 16:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2942516 00:34:49.887 16:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:49.887 16:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:49.887 [global] 00:34:49.887 thread=1 00:34:49.887 
invalidate=1 00:34:49.887 rw=read 00:34:49.887 time_based=1 00:34:49.887 runtime=10 00:34:49.887 ioengine=libaio 00:34:49.887 direct=1 00:34:49.887 bs=4096 00:34:49.887 iodepth=1 00:34:49.887 norandommap=1 00:34:49.887 numjobs=1 00:34:49.887 00:34:49.887 [job0] 00:34:49.887 filename=/dev/nvme0n1 00:34:49.887 [job1] 00:34:49.887 filename=/dev/nvme0n2 00:34:49.887 [job2] 00:34:49.887 filename=/dev/nvme0n3 00:34:49.887 [job3] 00:34:49.887 filename=/dev/nvme0n4 00:34:49.887 Could not set queue depth (nvme0n1) 00:34:49.887 Could not set queue depth (nvme0n2) 00:34:49.887 Could not set queue depth (nvme0n3) 00:34:49.887 Could not set queue depth (nvme0n4) 00:34:50.147 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:50.147 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:50.147 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:50.147 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:50.147 fio-3.35 00:34:50.147 Starting 4 threads 00:34:52.685 16:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:52.945 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=782336, buflen=4096 00:34:52.945 fio: pid=2942714, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:52.945 16:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:53.204 16:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:53.204 16:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:53.204 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=688128, buflen=4096 00:34:53.204 fio: pid=2942713, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:53.204 16:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:53.204 16:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:53.464 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=5857280, buflen=4096 00:34:53.464 fio: pid=2942705, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:53.464 16:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:53.464 16:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:53.464 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=450560, buflen=4096 00:34:53.464 fio: pid=2942707, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:53.464 00:34:53.464 
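The fio: io_u error ... Operation not supported lines above are the point of this run: target/fio.sh starts fio reading from the four namespaces (-t read -r 10), then deletes the backing RAID and malloc bdevs out from under it over RPC, so the in-flight reads are expected to fail; the per-job summaries below accordingly report err=95 for every file, and the script later checks for exactly this outcome ("nvmf hotplug test: fio failed as expected"). A minimal sketch of the hotplug pattern, assembled only from commands visible in this log (error handling simplified; in the real script the bdev names come from $malloc_bdevs/$raid_malloc_bdevs/$concat_malloc_bdevs rather than being listed literally):
# start fio in the background against the connected namespaces
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3
# delete the backing bdevs while fio still has reads in flight
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_raid_delete concat0
$rpc bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
  $rpc bdev_malloc_delete "$m"
done
# fio exits non-zero once its files disappear -- that is the pass condition here
wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'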
job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2942705: Tue Oct 1 16:59:45 2024 00:34:53.464 read: IOPS=454, BW=1816KiB/s (1860kB/s)(5720KiB/3149msec) 00:34:53.464 slat (usec): min=6, max=22522, avg=45.95, stdev=622.85 00:34:53.464 clat (usec): min=475, max=41492, avg=2134.34, stdev=6927.02 00:34:53.464 lat (usec): min=502, max=41517, avg=2180.17, stdev=6951.38 00:34:53.464 clat percentiles (usec): 00:34:53.464 | 1.00th=[ 668], 5.00th=[ 766], 10.00th=[ 824], 20.00th=[ 848], 00:34:53.464 | 30.00th=[ 898], 40.00th=[ 906], 50.00th=[ 914], 60.00th=[ 922], 00:34:53.464 | 70.00th=[ 930], 80.00th=[ 947], 90.00th=[ 971], 95.00th=[ 1012], 00:34:53.464 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:34:53.464 | 99.99th=[41681] 00:34:53.464 bw ( KiB/s): min= 96, max= 4232, per=82.13%, avg=1859.67, stdev=1621.97, samples=6 00:34:53.464 iops : min= 24, max= 1058, avg=464.83, stdev=405.52, samples=6 00:34:53.464 lat (usec) : 500=0.07%, 750=4.05%, 1000=90.01% 00:34:53.464 lat (msec) : 2=2.66%, 4=0.07%, 50=3.07% 00:34:53.464 cpu : usr=0.92%, sys=1.52%, ctx=1434, majf=0, minf=2 00:34:53.464 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.464 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.464 issued rwts: total=1431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.464 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:53.464 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2942707: Tue Oct 1 16:59:45 2024 00:34:53.464 read: IOPS=33, BW=131KiB/s (134kB/s)(440KiB/3356msec) 00:34:53.464 slat (usec): min=6, max=9608, avg=111.23, stdev=909.65 00:34:53.464 clat (usec): min=688, max=42094, avg=30194.02, stdev=18442.09 00:34:53.464 lat (usec): min=696, max=51036, avg=30306.03, stdev=18519.99 00:34:53.464 clat percentiles (usec): 00:34:53.464 | 1.00th=[ 742], 5.00th=[ 783], 10.00th=[ 906], 20.00th=[ 947], 00:34:53.464 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:34:53.464 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:53.464 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:53.464 | 99.99th=[42206] 00:34:53.464 bw ( KiB/s): min= 88, max= 336, per=5.96%, avg=135.67, stdev=98.28, samples=6 00:34:53.464 iops : min= 22, max= 84, avg=33.83, stdev=24.61, samples=6 00:34:53.464 lat (usec) : 750=2.70%, 1000=24.32% 00:34:53.464 lat (msec) : 2=0.90%, 50=71.17% 00:34:53.464 cpu : usr=0.18%, sys=0.00%, ctx=113, majf=0, minf=1 00:34:53.464 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.464 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.464 issued rwts: total=111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.464 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:53.464 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2942713: Tue Oct 1 16:59:45 2024 00:34:53.464 read: IOPS=57, BW=230KiB/s (236kB/s)(672KiB/2917msec) 00:34:53.464 slat (usec): min=6, max=8599, avg=76.42, stdev=659.54 00:34:53.464 clat (usec): min=578, max=42039, avg=17126.84, stdev=19966.47 00:34:53.464 lat (usec): min=621, max=49995, avg=17203.56, stdev=20040.04 
00:34:53.464 clat percentiles (usec): 00:34:53.464 | 1.00th=[ 668], 5.00th=[ 775], 10.00th=[ 840], 20.00th=[ 914], 00:34:53.464 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 979], 60.00th=[ 1303], 00:34:53.464 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:34:53.464 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:53.464 | 99.99th=[42206] 00:34:53.464 bw ( KiB/s): min= 88, max= 880, per=11.13%, avg=252.80, stdev=350.66, samples=5 00:34:53.464 iops : min= 22, max= 220, avg=63.20, stdev=87.67, samples=5 00:34:53.464 lat (usec) : 750=4.14%, 1000=50.89% 00:34:53.464 lat (msec) : 2=4.73%, 50=39.64% 00:34:53.464 cpu : usr=0.21%, sys=0.10%, ctx=170, majf=0, minf=2 00:34:53.464 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.464 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.464 issued rwts: total=169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.464 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:53.464 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2942714: Tue Oct 1 16:59:45 2024 00:34:53.464 read: IOPS=71, BW=283KiB/s (290kB/s)(764KiB/2696msec) 00:34:53.464 slat (nsec): min=6343, max=42736, avg=23478.61, stdev=7007.76 00:34:53.464 clat (usec): min=651, max=42249, avg=13945.73, stdev=19102.28 00:34:53.464 lat (usec): min=658, max=42275, avg=13969.20, stdev=19103.79 00:34:53.464 clat percentiles (usec): 00:34:53.464 | 1.00th=[ 701], 5.00th=[ 750], 10.00th=[ 791], 20.00th=[ 865], 00:34:53.464 | 30.00th=[ 914], 40.00th=[ 930], 50.00th=[ 947], 60.00th=[ 979], 00:34:53.464 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:34:53.464 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:53.464 | 99.99th=[42206] 00:34:53.464 bw ( KiB/s): min= 96, max= 760, per=13.21%, avg=299.20, stdev=299.31, samples=5 00:34:53.464 iops : min= 24, max= 190, avg=74.80, stdev=74.83, samples=5 00:34:53.464 lat (usec) : 750=6.25%, 1000=55.73% 00:34:53.464 lat (msec) : 2=5.73%, 50=31.77% 00:34:53.464 cpu : usr=0.00%, sys=0.33%, ctx=192, majf=0, minf=2 00:34:53.464 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.464 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.464 issued rwts: total=192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.464 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:53.464 00:34:53.464 Run status group 0 (all jobs): 00:34:53.464 READ: bw=2263KiB/s (2318kB/s), 131KiB/s-1816KiB/s (134kB/s-1860kB/s), io=7596KiB (7778kB), run=2696-3356msec 00:34:53.464 00:34:53.464 Disk stats (read/write): 00:34:53.464 nvme0n1: ios=1429/0, merge=0/0, ticks=2923/0, in_queue=2923, util=94.61% 00:34:53.464 nvme0n2: ios=111/0, merge=0/0, ticks=3333/0, in_queue=3333, util=95.83% 00:34:53.464 nvme0n3: ios=166/0, merge=0/0, ticks=2779/0, in_queue=2779, util=96.17% 00:34:53.464 nvme0n4: ios=189/0, merge=0/0, ticks=2585/0, in_queue=2585, util=96.40% 00:34:53.724 16:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:53.724 16:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:53.984 16:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:53.984 16:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:54.245 16:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:54.245 16:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:54.245 16:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:54.245 16:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:54.504 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:54.504 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2942516 00:34:54.504 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:54.504 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:54.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:54.504 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:54.504 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:34:54.504 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:34:54.504 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:54.504 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:34:54.504 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:54.765 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:34:54.765 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:54.765 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:54.765 nvmf hotplug test: fio failed as expected 00:34:54.765 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:54.765 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:54.765 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 
-- # rm -f ./local-job1-1-verify.state 00:34:54.765 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:54.765 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:54.765 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:54.765 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:54.765 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:54.765 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:54.765 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:54.765 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:54.765 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:54.765 rmmod nvme_tcp 00:34:54.765 rmmod nvme_fabrics 00:34:54.765 rmmod nvme_keyring 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 2939619 ']' 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 2939619 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2939619 ']' 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2939619 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2939619 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2939619' 00:34:55.027 killing process with pid 2939619 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2939619 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2939619 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:55.027 16:59:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:55.027 16:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:57.574 00:34:57.574 real 0m28.013s 00:34:57.574 user 1m49.166s 00:34:57.574 sys 0m11.770s 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:57.574 ************************************ 00:34:57.574 END TEST nvmf_fio_target 00:34:57.574 ************************************ 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:57.574 ************************************ 00:34:57.574 START TEST nvmf_bdevio 00:34:57.574 ************************************ 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:57.574 * Looking for test storage... 
00:34:57.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:57.574 16:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:57.574 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:57.574 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:57.574 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:57.574 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:57.574 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:57.574 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:57.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.574 --rc genhtml_branch_coverage=1 00:34:57.574 --rc genhtml_function_coverage=1 00:34:57.574 --rc genhtml_legend=1 00:34:57.574 --rc geninfo_all_blocks=1 00:34:57.574 --rc geninfo_unexecuted_blocks=1 00:34:57.574 00:34:57.574 ' 00:34:57.574 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:57.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.574 --rc genhtml_branch_coverage=1 00:34:57.574 --rc genhtml_function_coverage=1 00:34:57.574 --rc genhtml_legend=1 00:34:57.574 --rc geninfo_all_blocks=1 00:34:57.574 --rc geninfo_unexecuted_blocks=1 00:34:57.574 00:34:57.574 ' 00:34:57.574 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:57.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.574 --rc genhtml_branch_coverage=1 00:34:57.574 --rc genhtml_function_coverage=1 00:34:57.574 --rc genhtml_legend=1 00:34:57.574 --rc geninfo_all_blocks=1 00:34:57.574 --rc geninfo_unexecuted_blocks=1 00:34:57.574 00:34:57.574 ' 00:34:57.574 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:57.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:57.574 --rc genhtml_branch_coverage=1 00:34:57.574 --rc genhtml_function_coverage=1 00:34:57.574 --rc genhtml_legend=1 00:34:57.574 --rc geninfo_all_blocks=1 00:34:57.574 --rc geninfo_unexecuted_blocks=1 00:34:57.574 00:34:57.574 ' 00:34:57.575 16:59:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:57.575 16:59:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:57.575 16:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:05.712 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:05.712 16:59:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:05.712 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:05.712 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:05.712 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:05.712 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:05.713 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:05.713 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:05.713 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:05.713 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:05.713 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:05.713 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:05.713 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:05.713 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:05.713 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:05.713 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:05.713 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:05.713 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:05.713 16:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:05.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:05.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:35:05.713 00:35:05.713 --- 10.0.0.2 ping statistics --- 00:35:05.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.713 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:05.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:05.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:35:05.713 00:35:05.713 --- 10.0.0.1 ping statistics --- 00:35:05.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.713 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:05.713 16:59:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=2947528 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 2947528 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2947528 ']' 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:05.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:05.713 16:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.713 [2024-10-01 16:59:56.378943] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:05.713 [2024-10-01 16:59:56.380192] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:35:05.713 [2024-10-01 16:59:56.380244] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:05.713 [2024-10-01 16:59:56.437583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:05.713 [2024-10-01 16:59:56.491139] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:05.713 [2024-10-01 16:59:56.491171] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:05.713 [2024-10-01 16:59:56.491177] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:05.713 [2024-10-01 16:59:56.491182] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:05.713 [2024-10-01 16:59:56.491186] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:05.713 [2024-10-01 16:59:56.491285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:35:05.713 [2024-10-01 16:59:56.491436] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:35:05.713 [2024-10-01 16:59:56.491582] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:35:05.713 [2024-10-01 16:59:56.491584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:35:05.713 [2024-10-01 16:59:56.548796] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
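A note on what the harness is doing at this point: nvmfappstart has launched the target and waitforlisten is polling its RPC socket. Reduced to standalone shell, the same sequence looks roughly like the sketch below; the launch command is copied from the trace above, while the readiness loop is only an illustrative stand-in for waitforlisten (which retries /var/tmp/spdk.sock up to max_retries=100), not the harness's actual code:

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start the NVMe-oF target inside the test namespace, in interrupt mode (as traced above).
ip netns exec cvl_0_0_ns_spdk \
  "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
nvmfpid=$!
# Poll the default RPC socket until the app answers; rpc_get_methods is a cheap query.
for _ in $(seq 1 100); do
  "$SPDK_ROOT/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.5
done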
00:35:05.713 [2024-10-01 16:59:56.549181] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:05.713 [2024-10-01 16:59:56.549850] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:05.713 [2024-10-01 16:59:56.550503] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:05.713 [2024-10-01 16:59:56.550640] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.713 [2024-10-01 16:59:57.248065] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.713 Malloc0 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.713 16:59:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.713 [2024-10-01 16:59:57.316283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.713 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.714 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:05.714 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:05.714 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:35:05.714 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:35:05.714 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:05.714 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:05.714 { 00:35:05.714 "params": { 00:35:05.714 "name": "Nvme$subsystem", 00:35:05.714 "trtype": "$TEST_TRANSPORT", 00:35:05.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:05.714 "adrfam": "ipv4", 00:35:05.714 "trsvcid": "$NVMF_PORT", 00:35:05.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:05.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:05.714 "hdgst": ${hdgst:-false}, 00:35:05.714 "ddgst": ${ddgst:-false} 00:35:05.714 }, 00:35:05.714 "method": "bdev_nvme_attach_controller" 00:35:05.714 } 00:35:05.714 EOF 00:35:05.714 )") 00:35:05.714 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:35:05.714 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:35:05.714 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:35:05.714 16:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:05.714 "params": { 00:35:05.714 "name": "Nvme1", 00:35:05.714 "trtype": "tcp", 00:35:05.714 "traddr": "10.0.0.2", 00:35:05.714 "adrfam": "ipv4", 00:35:05.714 "trsvcid": "4420", 00:35:05.714 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:05.714 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:05.714 "hdgst": false, 00:35:05.714 "ddgst": false 00:35:05.714 }, 00:35:05.714 "method": "bdev_nvme_attach_controller" 00:35:05.714 }' 00:35:05.714 [2024-10-01 16:59:57.371017] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
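For reference, gen_nvmf_target_json above assembles the bdev_nvme_attach_controller stanza that bdevio consumes through the /dev/fd/62 process substitution. A minimal standalone equivalent writes the same config to a regular file; the outer "subsystems"/"bdev" wrapper is SPDK's standard JSON-config shape and is assumed here, since the trace only prints the inner method object, and the file name is illustrative:

# Params below are copied from the JSON printed above; /tmp path is hypothetical.
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same bdevio binary the trace invokes, pointed at the file instead of /dev/fd/62.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json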
00:35:05.714 [2024-10-01 16:59:57.371070] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2947708 ]
00:35:05.975 [2024-10-01 16:59:57.440196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:35:05.975 [2024-10-01 16:59:57.506518] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:35:05.975 [2024-10-01 16:59:57.506644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:35:05.975 [2024-10-01 16:59:57.506647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:35:05.975 I/O targets:
00:35:05.975 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:35:05.975 
00:35:05.975 
00:35:05.975 CUnit - A unit testing framework for C - Version 2.1-3
00:35:05.975 http://cunit.sourceforge.net/
00:35:05.975 
00:35:05.975 
00:35:05.975 Suite: bdevio tests on: Nvme1n1
00:35:06.236 Test: blockdev write read block ...passed
00:35:06.236 Test: blockdev write zeroes read block ...passed
00:35:06.236 Test: blockdev write zeroes read no split ...passed
00:35:06.236 Test: blockdev write zeroes read split ...passed
00:35:06.236 Test: blockdev write zeroes read split partial ...passed
00:35:06.236 Test: blockdev reset ...[2024-10-01 16:59:57.745601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-10-01 16:59:57.745659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2d090 (9): Bad file descriptor
[2024-10-01 16:59:57.881053] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:35:06.236 passed 00:35:06.498 Test: blockdev write read 8 blocks ...passed 00:35:06.498 Test: blockdev write read size > 128k ...passed 00:35:06.498 Test: blockdev write read invalid size ...passed 00:35:06.498 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:06.498 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:06.498 Test: blockdev write read max offset ...passed 00:35:06.498 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:06.498 Test: blockdev writev readv 8 blocks ...passed 00:35:06.498 Test: blockdev writev readv 30 x 1block ...passed 00:35:06.498 Test: blockdev writev readv block ...passed 00:35:06.498 Test: blockdev writev readv size > 128k ...passed 00:35:06.498 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:06.498 Test: blockdev comparev and writev ...[2024-10-01 16:59:58.146012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:06.498 [2024-10-01 16:59:58.146037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.498 [2024-10-01 16:59:58.146053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:06.498 [2024-10-01 16:59:58.146059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.498 [2024-10-01 16:59:58.146591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:06.498 [2024-10-01 16:59:58.146600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:06.498 [2024-10-01 16:59:58.146610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:06.498 [2024-10-01 16:59:58.146615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:06.498 [2024-10-01 16:59:58.147107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:06.498 [2024-10-01 16:59:58.147118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:06.498 [2024-10-01 16:59:58.147129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:06.498 [2024-10-01 16:59:58.147135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:06.498 [2024-10-01 16:59:58.147637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:06.498 [2024-10-01 16:59:58.147647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:06.498 [2024-10-01 16:59:58.147656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:06.498 [2024-10-01 16:59:58.147662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:06.759 passed 00:35:06.759 Test: blockdev nvme passthru rw ...passed 00:35:06.759 Test: blockdev nvme passthru vendor specific ...[2024-10-01 16:59:58.231618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:06.759 [2024-10-01 16:59:58.231629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:06.759 [2024-10-01 16:59:58.231960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:06.759 [2024-10-01 16:59:58.231971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:06.759 [2024-10-01 16:59:58.232321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:06.759 [2024-10-01 16:59:58.232329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:06.759 [2024-10-01 16:59:58.232659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:06.759 [2024-10-01 16:59:58.232667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:06.759 passed 00:35:06.759 Test: blockdev nvme admin passthru ...passed 00:35:06.759 Test: blockdev copy ...passed 00:35:06.759 00:35:06.759 Run Summary: Type Total Ran Passed Failed Inactive 00:35:06.759 suites 1 1 n/a 0 0 00:35:06.759 tests 23 23 23 0 0 00:35:06.759 asserts 152 152 152 0 n/a 00:35:06.759 00:35:06.759 Elapsed time = 1.296 seconds 00:35:06.759 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:06.759 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.759 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:06.759 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.759 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:06.759 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:06.759 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:06.759 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:06.759 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:06.759 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:06.759 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:06.759 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:06.759 rmmod nvme_tcp 00:35:06.759 rmmod nvme_fabrics 00:35:07.021 rmmod nvme_keyring 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
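Everything from here on is nvmftestfini tearing the rig back down. Pulled out of the harness functions, the cleanup visible in the rest of this trace amounts to the following sketch ($nvmfpid is the pid captured at startup; the netns removal stands in for _remove_spdk_ns, whose exact implementation is not shown in this excerpt):

# Unload the kernel NVMe/TCP initiator modules (logged as rmmod nvme_tcp/nvme_fabrics/nvme_keyring).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# killprocess: stop the nvmf_tgt started earlier.
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
# iptr: restore the firewall minus the SPDK-tagged rule, keyed on the comment added at setup.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Drop the test namespace and the leftover initiator-side address.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1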
00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 2947528 ']' 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 2947528 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2947528 ']' 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2947528 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2947528 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2947528' 00:35:07.021 killing process with pid 2947528 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2947528 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2947528 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:07.021 16:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:09.562 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:09.562 00:35:09.562 real 0m11.953s 00:35:09.562 user 
0m9.275s 00:35:09.562 sys 0m6.238s 00:35:09.562 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:09.562 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:09.562 ************************************ 00:35:09.562 END TEST nvmf_bdevio 00:35:09.562 ************************************ 00:35:09.562 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:09.562 00:35:09.562 real 4m55.860s 00:35:09.562 user 9m49.453s 00:35:09.562 sys 2m3.527s 00:35:09.562 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:09.562 17:00:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:09.562 ************************************ 00:35:09.562 END TEST nvmf_target_core_interrupt_mode 00:35:09.562 ************************************ 00:35:09.562 17:00:00 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:09.562 17:00:00 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:09.562 17:00:00 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:09.562 17:00:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:09.562 ************************************ 00:35:09.562 START TEST nvmf_interrupt 00:35:09.562 ************************************ 00:35:09.562 17:00:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:09.562 * Looking for test storage... 
00:35:09.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:09.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.562 --rc genhtml_branch_coverage=1 00:35:09.562 --rc genhtml_function_coverage=1 00:35:09.562 --rc genhtml_legend=1 00:35:09.562 --rc geninfo_all_blocks=1 00:35:09.562 --rc geninfo_unexecuted_blocks=1 00:35:09.562 00:35:09.562 ' 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:09.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.562 --rc genhtml_branch_coverage=1 00:35:09.562 --rc genhtml_function_coverage=1 00:35:09.562 --rc genhtml_legend=1 00:35:09.562 --rc geninfo_all_blocks=1 00:35:09.562 --rc geninfo_unexecuted_blocks=1 00:35:09.562 00:35:09.562 ' 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:09.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.562 --rc genhtml_branch_coverage=1 00:35:09.562 --rc genhtml_function_coverage=1 00:35:09.562 --rc genhtml_legend=1 00:35:09.562 --rc geninfo_all_blocks=1 00:35:09.562 --rc geninfo_unexecuted_blocks=1 00:35:09.562 00:35:09.562 ' 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:09.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.562 --rc genhtml_branch_coverage=1 00:35:09.562 --rc genhtml_function_coverage=1 00:35:09.562 --rc genhtml_legend=1 00:35:09.562 --rc geninfo_all_blocks=1 00:35:09.562 --rc geninfo_unexecuted_blocks=1 00:35:09.562 00:35:09.562 ' 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:09.562 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:09.563 17:00:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.698 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:17.698 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:17.698 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:17.698 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:17.698 17:00:08 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:17.699 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:17.699 17:00:08 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:17.699 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:17.699 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:17.699 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:17.699 17:00:08 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:17.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:17.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:35:17.699 00:35:17.699 --- 10.0.0.2 ping statistics --- 00:35:17.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.699 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:17.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:17.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:35:17.699 00:35:17.699 --- 10.0.0.1 ping statistics --- 00:35:17.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.699 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=2951906 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 2951906 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 2951906 ']' 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:17.699 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:17.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.700 [2024-10-01 17:00:08.407259] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:17.700 [2024-10-01 17:00:08.408098] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:35:17.700 [2024-10-01 17:00:08.408142] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:17.700 [2024-10-01 17:00:08.486732] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:17.700 [2024-10-01 17:00:08.575446] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:17.700 [2024-10-01 17:00:08.575504] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:17.700 [2024-10-01 17:00:08.575513] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:17.700 [2024-10-01 17:00:08.575520] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:17.700 [2024-10-01 17:00:08.575526] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:17.700 [2024-10-01 17:00:08.575650] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:17.700 [2024-10-01 17:00:08.575655] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:17.700 [2024-10-01 17:00:08.650063] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:17.700 [2024-10-01 17:00:08.650190] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:17.700 [2024-10-01 17:00:08.650314] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:17.700 5000+0 records in 00:35:17.700 5000+0 records out 00:35:17.700 10240000 bytes (10 MB, 9.8 MiB) copied, 0.019485 s, 526 MB/s 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.700 AIO0 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.700 [2024-10-01 17:00:08.832605] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.700 17:00:08 
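The nvmf_tcp_init sequence traced above builds the standard SPDK physical-NIC TCP topology: the target port cvl_0_0 is moved into the network namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2, while its peer cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator; an iptables rule tagged with an SPDK_NVMF comment opens TCP port 4420, and a ping in each direction proves the link before nvmf_tgt is launched inside the namespace in interrupt mode. A condensed sketch of the same setup, using the interface and namespace names from the trace (error handling and the full comment string omitted):

    # Flush old addresses and split the two ports across namespaces.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target side, in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port; the comment marks the rule so teardown can
    # strip it later with: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF

    # Verify both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Running the target under `ip netns exec cvl_0_0_ns_spdk` is what lets a single host exercise a real NIC-to-NIC TCP path: initiator and target see separate network stacks even though both run on the same machine.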
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.700 [2024-10-01 17:00:08.884946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2951906 0 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2951906 0 idle 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2951906 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2951906 -w 256 00:35:17.700 17:00:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2951906 root 20 0 128.2g 45056 32768 S 0.0 0.0 0:00.28 reactor_0' 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2951906 root 20 0 128.2g 45056 32768 S 0.0 0.0 0:00.28 reactor_0 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2951906 1 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2951906 1 idle 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2951906 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2951906 -w 256 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2951911 root 20 0 128.2g 45056 32768 S 0.0 0.0 0:00.00 reactor_1' 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2951911 root 20 0 128.2g 45056 32768 S 0.0 0.0 0:00.00 reactor_1 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2952221 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
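Because the target was started with --interrupt-mode, both reactors are expected to sit near 0% CPU while no I/O is in flight, and to climb toward 100% once spdk_nvme_perf starts pushing queue-depth-256 random I/O at them. The reactor_is_busy_or_idle checks traced above sample per-thread CPU with top and compare the rate against a threshold (30% for idle, 65% for busy by default; the test lowers the busy threshold to 30 around the perf run). A condensed sketch of that probe, assuming the same thresholds and using this run's pid as a placeholder (the real helper in interrupt/common.sh also retries the sample up to ten times, omitted here):

    # Return the integer %CPU of thread reactor_<idx> inside process <pid>.
    reactor_cpu_rate() {
        local pid=$1 idx=$2
        # -b batch output, -H per-thread rows, -n 1 one sample; SPDK names
        # reactor threads reactor_<core>, and %CPU is top's 9th column.
        top -bHn 1 -p "$pid" -w 256 | awk -v name="reactor_$idx" \
            '$NF == name { print int($9) }'
    }

    rate=$(reactor_cpu_rate 2951906 0)
    if (( rate >= 65 )); then
        echo "reactor_0 busy at ${rate}%"
    elif (( rate <= 30 )); then
        echo "reactor_0 idle at ${rate}%"
    fi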
00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2951906 0 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2951906 0 busy 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2951906 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:17.700 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:17.701 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:17.701 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2951906 -w 256 00:35:17.701 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2951906 root 20 0 128.2g 45056 32768 R 99.9 0.0 0:00.43 reactor_0' 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2951906 root 20 0 128.2g 45056 32768 R 99.9 0.0 0:00.43 reactor_0 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2951906 1 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2951906 1 busy 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2951906 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2951906 -w 256 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:17.962 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2951911 root 20 0 128.2g 45056 32768 R 99.9 0.0 0:00.25 reactor_1' 00:35:18.223 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2951911 root 20 0 128.2g 45056 32768 R 99.9 0.0 0:00.25 reactor_1 00:35:18.223 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:18.223 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:18.223 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:18.223 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:18.223 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:18.223 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:18.223 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:18.224 17:00:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:18.224 17:00:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2952221 00:35:28.223 Initializing NVMe Controllers 00:35:28.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:28.223 Controller IO queue size 256, less than required. 00:35:28.223 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:28.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:28.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:28.223 Initialization complete. Launching workers. 
00:35:28.223 ========================================================
00:35:28.223 Latency(us)
00:35:28.223 Device Information : IOPS MiB/s Average min max
00:35:28.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 17755.00 69.36 14426.07 4224.05 35353.59
00:35:28.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 23187.90 90.58 11044.67 2200.57 15480.41
00:35:28.223 ========================================================
00:35:28.223 Total : 40942.89 159.93 12511.02 2200.57 35353.59
00:35:28.223
00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2951906 0 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2951906 0 idle 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2951906 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2951906 -w 256 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2951906 root 20 0 128.2g 45056 32768 S 0.0 0.0 0:20.30 reactor_0' 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2951906 root 20 0 128.2g 45056 32768 S 0.0 0.0 0:20.30 reactor_0 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2951906 1 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2951906 1 idle 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2951906 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt --
interrupt/common.sh@11 -- # local idx=1 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2951906 -w 256 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2951911 root 20 0 128.2g 45056 32768 S 0.0 0.0 0:10.00 reactor_1' 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2951911 root 20 0 128.2g 45056 32768 S 0.0 0.0 0:10.00 reactor_1 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:28.223 17:00:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:28.794 17:00:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:28.794 17:00:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:35:28.794 17:00:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:35:28.794 17:00:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:35:28.794 17:00:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2951906 0 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2951906 0 idle 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2951906 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2951906 -w 256 00:35:30.707 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2951906 root 20 0 128.2g 79872 32768 S 6.7 0.1 0:20.56 reactor_0' 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2951906 root 20 0 128.2g 79872 32768 S 6.7 0.1 0:20.56 reactor_0 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2951906 1 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2951906 1 idle 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2951906 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2951906 -w 256 00:35:30.968 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2951911 root 20 0 128.2g 79872 32768 S 0.0 0.1 0:10.05 reactor_1' 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2951911 root 20 0 128.2g 79872 32768 S 0.0 0.1 0:10.05 reactor_1 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:31.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:31.229 rmmod nvme_tcp 00:35:31.229 rmmod nvme_fabrics 00:35:31.229 rmmod nvme_keyring 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 
2951906 ']' 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 2951906 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 2951906 ']' 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 2951906 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:31.229 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2951906 00:35:31.497 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:31.497 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:31.497 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2951906' 00:35:31.497 killing process with pid 2951906 00:35:31.497 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 2951906 00:35:31.497 17:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 2951906 00:35:31.497 17:00:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:31.497 17:00:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:31.497 17:00:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:31.497 17:00:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:31.497 17:00:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:31.497 17:00:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:35:31.497 17:00:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:35:31.497 17:00:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:31.497 17:00:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:31.497 17:00:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.497 17:00:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:31.497 17:00:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.100 17:00:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:34.100 00:35:34.100 real 0m24.286s 00:35:34.100 user 0m39.863s 00:35:34.100 sys 0m9.579s 00:35:34.100 17:00:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:34.100 17:00:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:34.100 ************************************ 00:35:34.100 END TEST nvmf_interrupt 00:35:34.100 ************************************ 00:35:34.100 00:35:34.100 real 29m47.157s 00:35:34.100 user 60m52.110s 00:35:34.100 sys 9m45.567s 00:35:34.100 17:00:25 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:34.100 17:00:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:34.100 ************************************ 00:35:34.100 END TEST nvmf_tcp 00:35:34.100 ************************************ 00:35:34.100 17:00:25 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:35:34.100 17:00:25 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:34.100 17:00:25 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:34.100 17:00:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:34.100 17:00:25 -- common/autotest_common.sh@10 -- # set +x 00:35:34.100 ************************************ 00:35:34.100 START TEST spdkcli_nvmf_tcp 00:35:34.100 ************************************ 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:34.100 * Looking for test storage... 00:35:34.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:34.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.100 --rc genhtml_branch_coverage=1 00:35:34.100 --rc genhtml_function_coverage=1 00:35:34.100 --rc genhtml_legend=1 00:35:34.100 --rc geninfo_all_blocks=1 00:35:34.100 --rc geninfo_unexecuted_blocks=1 00:35:34.100 00:35:34.100 ' 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:34.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.100 --rc genhtml_branch_coverage=1 00:35:34.100 --rc genhtml_function_coverage=1 00:35:34.100 --rc genhtml_legend=1 00:35:34.100 --rc geninfo_all_blocks=1 00:35:34.100 --rc geninfo_unexecuted_blocks=1 00:35:34.100 00:35:34.100 ' 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:34.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.100 --rc genhtml_branch_coverage=1 00:35:34.100 --rc genhtml_function_coverage=1 00:35:34.100 --rc genhtml_legend=1 00:35:34.100 --rc geninfo_all_blocks=1 00:35:34.100 --rc geninfo_unexecuted_blocks=1 00:35:34.100 00:35:34.100 ' 00:35:34.100 17:00:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:34.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.101 --rc genhtml_branch_coverage=1 00:35:34.101 --rc genhtml_function_coverage=1 00:35:34.101 --rc genhtml_legend=1 00:35:34.101 --rc geninfo_all_blocks=1 00:35:34.101 --rc geninfo_unexecuted_blocks=1 00:35:34.101 00:35:34.101 ' 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:34.101 
17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:34.101 17:00:25 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:34.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2955498 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2955498 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 2955498 ']' 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:34.101 17:00:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:34.101 [2024-10-01 17:00:25.595777] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
00:35:34.101 [2024-10-01 17:00:25.595844] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2955498 ] 00:35:34.101 [2024-10-01 17:00:25.677703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:34.101 [2024-10-01 17:00:25.769433] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:35:34.101 [2024-10-01 17:00:25.769440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.040 17:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:35.040 17:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:35:35.040 17:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:35.040 17:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:35.040 17:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:35.040 17:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:35.040 17:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:35.040 17:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:35.040 17:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:35.040 17:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:35.040 17:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:35.040 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:35.040 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:35.040 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:35.040 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:35.040 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:35.040 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:35.040 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:35.040 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:35.040 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:35.040 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:35.040 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:35.040 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:35.040 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:35.040 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:35.040 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:35.040 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:35:35.040 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:35.040 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:35.040 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:35.040 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:35.040 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:35.040 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:35.040 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:35.040 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:35.040 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:35.040 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:35.040 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:35.040 ' 00:35:37.580 [2024-10-01 17:00:29.079567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:38.957 [2024-10-01 17:00:30.287517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:40.864 [2024-10-01 17:00:32.505908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:42.779 [2024-10-01 17:00:34.411519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:44.691 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:44.691 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:44.691 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:44.691 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:44.691 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:44.691 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:44.691 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:44.691 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:44.691 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:44.691 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:44.691 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:44.691 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:44.691 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:44.691 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:44.691 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:44.691 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:44.691 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:44.691 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:44.691 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:44.691 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:44.691 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:44.691 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:44.691 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:44.691 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:44.691 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:44.691 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:44.691 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:44.691 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:44.691 17:00:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:44.691 17:00:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:44.691 17:00:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:44.691 17:00:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:44.691 17:00:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:44.691 17:00:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:44.691 17:00:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:44.691 17:00:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:44.951 17:00:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:44.951 17:00:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:44.951 17:00:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:44.951 17:00:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:44.951 17:00:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:44.951 
17:00:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:44.951 17:00:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:44.951 17:00:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:44.951 17:00:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:44.951 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:44.951 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:44.951 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:44.951 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:44.951 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:44.951 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:44.951 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:44.951 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:44.951 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:44.951 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:44.951 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:44.951 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:44.951 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:44.951 ' 00:35:50.233 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:50.233 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:50.233 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:50.233 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:50.233 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:50.233 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:50.233 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:50.233 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:50.233 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:50.233 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:50.233 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:50.233 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:50.233 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:50.233 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:50.233 
17:00:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2955498 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2955498 ']' 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2955498 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2955498 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2955498' 00:35:50.233 killing process with pid 2955498 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2955498 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2955498 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2955498 ']' 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2955498 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2955498 ']' 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2955498 00:35:50.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2955498) - No such process 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 2955498 is not found' 00:35:50.233 Process with pid 2955498 is not found 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:50.233 00:35:50.233 real 0m16.570s 00:35:50.233 user 0m34.369s 00:35:50.233 sys 0m0.865s 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:50.233 17:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:50.233 ************************************ 00:35:50.233 END TEST spdkcli_nvmf_tcp 00:35:50.233 ************************************ 00:35:50.233 17:00:41 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:50.233 17:00:41 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:50.233 17:00:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:50.233 17:00:41 -- common/autotest_common.sh@10 -- # set +x 00:35:50.494 ************************************ 00:35:50.494 START TEST nvmf_identify_passthru 00:35:50.494 ************************************ 00:35:50.494 17:00:41 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:50.494 * Looking for test 
storage... 00:35:50.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:50.494 17:00:42 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:50.494 17:00:42 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:50.494 17:00:42 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:35:50.494 17:00:42 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:50.494 17:00:42 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:50.495 17:00:42 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:50.495 17:00:42 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:50.495 17:00:42 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:50.495 17:00:42 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:50.495 17:00:42 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:50.495 17:00:42 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:50.495 17:00:42 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:50.495 17:00:42 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:50.495 17:00:42 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:50.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.495 --rc genhtml_branch_coverage=1 00:35:50.495 --rc genhtml_function_coverage=1 00:35:50.495 --rc genhtml_legend=1 00:35:50.495 --rc geninfo_all_blocks=1 00:35:50.495 --rc geninfo_unexecuted_blocks=1 00:35:50.495 00:35:50.495 ' 00:35:50.495 17:00:42 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:50.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.495 --rc genhtml_branch_coverage=1 00:35:50.495 --rc genhtml_function_coverage=1 00:35:50.495 --rc genhtml_legend=1 00:35:50.495 --rc geninfo_all_blocks=1 00:35:50.495 --rc geninfo_unexecuted_blocks=1 00:35:50.495 00:35:50.495 ' 00:35:50.495 17:00:42 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:50.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.495 --rc genhtml_branch_coverage=1 00:35:50.495 --rc genhtml_function_coverage=1 00:35:50.495 --rc genhtml_legend=1 00:35:50.495 --rc geninfo_all_blocks=1 00:35:50.495 --rc geninfo_unexecuted_blocks=1 00:35:50.495 00:35:50.495 ' 00:35:50.495 17:00:42 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:50.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.495 --rc genhtml_branch_coverage=1 00:35:50.495 --rc genhtml_function_coverage=1 00:35:50.495 --rc genhtml_legend=1 00:35:50.495 --rc geninfo_all_blocks=1 00:35:50.495 --rc geninfo_unexecuted_blocks=1 00:35:50.495 00:35:50.495 ' 00:35:50.495 17:00:42 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:50.495 17:00:42 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:50.495 17:00:42 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:50.495 17:00:42 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:50.495 17:00:42 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:50.495 17:00:42 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.495 17:00:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.495 17:00:42 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.495 17:00:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:50.495 17:00:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:50.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:50.495 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:50.495 17:00:42 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:50.495 17:00:42 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:50.495 17:00:42 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:50.495 17:00:42 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:50.495 17:00:42 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:50.495 17:00:42 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.495 17:00:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.755 17:00:42 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.755 17:00:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:50.755 17:00:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.755 17:00:42 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:50.756 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:50.756 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:50.756 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:50.756 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:50.756 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:50.756 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:50.756 17:00:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:50.756 17:00:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:50.756 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:50.756 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:50.756 17:00:42 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:50.756 17:00:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:58.894 17:00:49 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:58.894 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:58.894 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.894 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:58.895 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:58.895 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:58.895 17:00:49 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:58.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:58.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:35:58.895 00:35:58.895 --- 10.0.0.2 ping statistics --- 00:35:58.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.895 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:58.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
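# Sketch of the target/initiator topology that the xtraced nvmf/common.sh
# lines above just built, collected here for readability. The interface
# names cvl_0_0/cvl_0_1 are the e810 ports this rig detected, and the
# 10.0.0.x addresses are this suite's defaults -- both will differ on
# other hosts.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator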
00:35:58.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:35:58.895 00:35:58.895 --- 10.0.0.1 ping statistics --- 00:35:58.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.895 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:58.895 17:00:49 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:58.895 17:00:49 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:58.895 17:00:49 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:58.895 17:00:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:58.895 17:00:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:58.895 17:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:35:58.895 17:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:35:58.895 17:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:35:58.895 17:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:35:58.895 17:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:35:58.895 17:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:35:58.895 17:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:58.895 17:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:58.895 17:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:35:58.895 17:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:35:58.895 17:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:35:58.895 17:00:49 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:35:58.895 17:00:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:58.895 17:00:49 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:58.895 17:00:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:58.895 17:00:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:58.895 17:00:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:04.180 17:00:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ9512038S2P0BGN 00:36:04.180 17:00:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:04.180 17:00:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:04.180 17:00:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:08.384 17:00:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:36:08.384 17:00:59 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:08.384 17:00:59 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:08.384 17:00:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:08.384 17:00:59 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:08.384 17:00:59 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:08.384 17:00:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:08.384 17:01:00 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2963490 00:36:08.384 17:01:00 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:08.384 17:01:00 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:08.384 17:01:00 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2963490 00:36:08.384 17:01:00 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 2963490 ']' 00:36:08.384 17:01:00 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.384 17:01:00 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:08.384 17:01:00 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:08.384 17:01:00 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:08.384 17:01:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:08.384 [2024-10-01 17:01:00.054905] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:36:08.384 [2024-10-01 17:01:00.054965] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:08.645 [2024-10-01 17:01:00.139097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:08.645 [2024-10-01 17:01:00.211339] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:08.645 [2024-10-01 17:01:00.211395] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
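# Sidebar: the NOTICE above spells out the tracing recipe for this exact
# run. A minimal sketch using only what the log itself states ('-s nvmf
# -i 0' and the shm file /dev/shm/nvmf_trace.0 come straight from the
# NOTICE; the /tmp destination is illustrative):
spdk_trace -s nvmf -i 0            # snapshot tracepoints while the target runs
cp /dev/shm/nvmf_trace.0 /tmp/     # keep the shm file for offline analysis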
00:36:08.645 [2024-10-01 17:01:00.211403] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:08.645 [2024-10-01 17:01:00.211410] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:08.645 [2024-10-01 17:01:00.211415] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:08.645 [2024-10-01 17:01:00.211526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:08.645 [2024-10-01 17:01:00.211657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:08.645 [2024-10-01 17:01:00.211777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:08.645 [2024-10-01 17:01:00.211780] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:09.589 17:01:00 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:09.589 17:01:00 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:36:09.589 17:01:00 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:09.589 17:01:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.589 17:01:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:09.589 INFO: Log level set to 20 00:36:09.589 INFO: Requests: 00:36:09.589 { 00:36:09.589 "jsonrpc": "2.0", 00:36:09.589 "method": "nvmf_set_config", 00:36:09.589 "id": 1, 00:36:09.589 "params": { 00:36:09.589 "admin_cmd_passthru": { 00:36:09.589 "identify_ctrlr": true 00:36:09.589 } 00:36:09.589 } 00:36:09.589 } 00:36:09.589 00:36:09.589 INFO: response: 00:36:09.589 { 00:36:09.589 "jsonrpc": "2.0", 00:36:09.589 "id": 1, 00:36:09.589 "result": true 00:36:09.589 } 00:36:09.589 00:36:09.589 17:01:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.589 17:01:00 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:09.589 17:01:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.589 17:01:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:09.589 INFO: Setting log level to 20 00:36:09.589 INFO: Setting log level to 20 00:36:09.589 INFO: Log level set to 20 00:36:09.589 INFO: Log level set to 20 00:36:09.589 INFO: Requests: 00:36:09.589 { 00:36:09.589 "jsonrpc": "2.0", 00:36:09.589 "method": "framework_start_init", 00:36:09.589 "id": 1 00:36:09.589 } 00:36:09.589 00:36:09.589 INFO: Requests: 00:36:09.589 { 00:36:09.589 "jsonrpc": "2.0", 00:36:09.589 "method": "framework_start_init", 00:36:09.589 "id": 1 00:36:09.589 } 00:36:09.589 00:36:09.589 [2024-10-01 17:01:01.022206] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:09.589 INFO: response: 00:36:09.589 { 00:36:09.589 "jsonrpc": "2.0", 00:36:09.589 "id": 1, 00:36:09.589 "result": true 00:36:09.589 } 00:36:09.589 00:36:09.589 INFO: response: 00:36:09.589 { 00:36:09.589 "jsonrpc": "2.0", 00:36:09.589 "id": 1, 00:36:09.589 "result": true 00:36:09.589 } 00:36:09.589 00:36:09.589 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.589 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:09.589 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.589 17:01:01 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:09.589 INFO: Setting log level to 40 00:36:09.589 INFO: Setting log level to 40 00:36:09.589 INFO: Setting log level to 40 00:36:09.589 [2024-10-01 17:01:01.035488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:09.589 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.589 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:09.589 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:09.589 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:09.589 17:01:01 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:09.589 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.589 17:01:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:12.887 Nvme0n1 00:36:12.887 17:01:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.887 17:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:12.887 17:01:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.887 17:01:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:12.887 17:01:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.887 17:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:12.887 17:01:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.887 17:01:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:12.887 17:01:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.887 17:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:12.887 17:01:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.887 17:01:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:12.887 [2024-10-01 17:01:03.942521] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:12.887 17:01:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.887 17:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:12.887 17:01:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.887 17:01:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:12.887 [ 00:36:12.887 { 00:36:12.887 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:12.887 "subtype": "Discovery", 00:36:12.887 "listen_addresses": [], 00:36:12.887 "allow_any_host": true, 00:36:12.887 "hosts": [] 00:36:12.887 }, 00:36:12.887 { 00:36:12.887 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:12.887 "subtype": "NVMe", 00:36:12.887 "listen_addresses": [ 00:36:12.887 { 00:36:12.887 "trtype": "TCP", 00:36:12.887 "adrfam": "IPv4", 00:36:12.887 "traddr": "10.0.0.2", 00:36:12.887 "trsvcid": "4420" 00:36:12.887 } 00:36:12.887 ], 00:36:12.887 "allow_any_host": true, 00:36:12.887 "hosts": [], 00:36:12.887 "serial_number": 
"SPDK00000000000001", 00:36:12.887 "model_number": "SPDK bdev Controller", 00:36:12.887 "max_namespaces": 1, 00:36:12.888 "min_cntlid": 1, 00:36:12.888 "max_cntlid": 65519, 00:36:12.888 "namespaces": [ 00:36:12.888 { 00:36:12.888 "nsid": 1, 00:36:12.888 "bdev_name": "Nvme0n1", 00:36:12.888 "name": "Nvme0n1", 00:36:12.888 "nguid": "8F12E61E93A34DD793481BEE541442C3", 00:36:12.888 "uuid": "8f12e61e-93a3-4dd7-9348-1bee541442c3" 00:36:12.888 } 00:36:12.888 ] 00:36:12.888 } 00:36:12.888 ] 00:36:12.888 17:01:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.888 17:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:12.888 17:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:12.888 17:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:12.888 17:01:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ9512038S2P0BGN 00:36:12.888 17:01:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:12.888 17:01:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:12.888 17:01:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:12.888 17:01:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:36:12.888 17:01:04 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ9512038S2P0BGN '!=' PHLJ9512038S2P0BGN ']' 00:36:12.888 17:01:04 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:36:12.888 17:01:04 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:12.888 17:01:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.888 17:01:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:12.888 17:01:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.888 17:01:04 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:12.888 17:01:04 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:12.888 17:01:04 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:12.888 17:01:04 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:12.888 17:01:04 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:12.888 17:01:04 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:12.888 17:01:04 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:12.888 17:01:04 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:12.888 rmmod nvme_tcp 00:36:12.888 rmmod nvme_fabrics 00:36:12.888 rmmod nvme_keyring 00:36:12.888 17:01:04 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:12.888 17:01:04 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:12.888 17:01:04 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:12.888 17:01:04 nvmf_identify_passthru -- nvmf/common.sh@515 -- # 
'[' -n 2963490 ']' 00:36:12.888 17:01:04 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 2963490 00:36:12.888 17:01:04 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 2963490 ']' 00:36:12.888 17:01:04 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 2963490 00:36:12.888 17:01:04 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:36:12.888 17:01:04 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:12.888 17:01:04 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2963490 00:36:12.888 17:01:04 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:12.888 17:01:04 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:12.888 17:01:04 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2963490' 00:36:12.888 killing process with pid 2963490 00:36:12.888 17:01:04 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 2963490 00:36:12.888 17:01:04 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 2963490 00:36:15.430 17:01:06 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:15.430 17:01:06 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:15.430 17:01:06 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:15.430 17:01:06 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:15.430 17:01:06 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:15.430 17:01:06 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:36:15.430 17:01:06 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:36:15.430 17:01:06 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:15.430 17:01:06 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:15.430 17:01:06 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.430 17:01:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:15.430 17:01:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:17.366 17:01:08 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:17.366 00:36:17.366 real 0m26.996s 00:36:17.366 user 0m35.751s 00:36:17.366 sys 0m7.438s 00:36:17.366 17:01:08 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:17.366 17:01:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:17.366 ************************************ 00:36:17.366 END TEST nvmf_identify_passthru 00:36:17.366 ************************************ 00:36:17.366 17:01:08 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:17.366 17:01:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:17.366 17:01:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:17.366 17:01:08 -- common/autotest_common.sh@10 -- # set +x 00:36:17.366 ************************************ 00:36:17.366 START TEST nvmf_dif 00:36:17.366 ************************************ 00:36:17.366 17:01:09 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:17.626 * Looking for test 
storage... 00:36:17.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:17.626 17:01:09 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:17.626 17:01:09 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:36:17.626 17:01:09 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:17.626 17:01:09 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:17.626 17:01:09 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:17.627 17:01:09 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:17.627 17:01:09 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:17.627 17:01:09 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:17.627 17:01:09 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:17.627 17:01:09 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:17.627 17:01:09 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:17.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.627 --rc genhtml_branch_coverage=1 00:36:17.627 --rc genhtml_function_coverage=1 00:36:17.627 --rc genhtml_legend=1 00:36:17.627 --rc geninfo_all_blocks=1 00:36:17.627 --rc geninfo_unexecuted_blocks=1 00:36:17.627 00:36:17.627 ' 00:36:17.627 17:01:09 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:17.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.627 --rc genhtml_branch_coverage=1 00:36:17.627 --rc genhtml_function_coverage=1 00:36:17.627 --rc genhtml_legend=1 00:36:17.627 --rc geninfo_all_blocks=1 00:36:17.627 --rc geninfo_unexecuted_blocks=1 00:36:17.627 00:36:17.627 ' 00:36:17.627 17:01:09 nvmf_dif -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:17.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.627 --rc genhtml_branch_coverage=1 00:36:17.627 --rc genhtml_function_coverage=1 00:36:17.627 --rc genhtml_legend=1 00:36:17.627 --rc geninfo_all_blocks=1 00:36:17.627 --rc geninfo_unexecuted_blocks=1 00:36:17.627 00:36:17.627 ' 00:36:17.627 17:01:09 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:17.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.627 --rc genhtml_branch_coverage=1 00:36:17.627 --rc genhtml_function_coverage=1 00:36:17.627 --rc genhtml_legend=1 00:36:17.627 --rc geninfo_all_blocks=1 00:36:17.627 --rc geninfo_unexecuted_blocks=1 00:36:17.627 00:36:17.627 ' 00:36:17.627 17:01:09 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:17.627 17:01:09 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:17.627 17:01:09 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:17.627 17:01:09 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:17.627 17:01:09 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:17.627 17:01:09 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.627 17:01:09 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.627 17:01:09 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.627 17:01:09 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:17.627 17:01:09 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:17.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:17.627 17:01:09 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:17.627 17:01:09 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:17.627 17:01:09 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:17.627 17:01:09 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:17.627 17:01:09 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:17.627 17:01:09 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:17.627 17:01:09 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:17.627 17:01:09 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:36:17.627 17:01:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:25.768 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:25.768 
17:01:16 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:25.768 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:25.768 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:25.768 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1
00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:25.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:25.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms
00:36:25.768
00:36:25.768 --- 10.0.0.2 ping statistics ---
00:36:25.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:25.768 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms
00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:25.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:25.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:36:25.768 00:36:25.768 --- 10.0.0.1 ping statistics --- 00:36:25.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:25.768 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:36:25.768 17:01:16 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:27.681 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:27.681 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:27.681 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:27.681 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:27.681 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:27.681 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:27.681 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:27.681 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:27.681 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:27.681 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:27.681 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:27.681 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:27.681 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:27.681 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:27.681 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:27.681 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:27.681 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:27.963 17:01:19 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:27.963 17:01:19 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:27.963 17:01:19 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:27.963 17:01:19 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:27.963 17:01:19 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:27.963 17:01:19 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:27.963 17:01:19 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:27.963 17:01:19 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:27.963 17:01:19 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:27.963 17:01:19 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:27.963 17:01:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:27.963 17:01:19 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=2969799 00:36:27.963 17:01:19 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 2969799 00:36:27.963 17:01:19 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:27.963 17:01:19 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 2969799 ']' 00:36:27.963 17:01:19 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:27.963 17:01:19 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:27.963 17:01:19 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:36:27.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:27.963 17:01:19 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:27.963 17:01:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:27.963 [2024-10-01 17:01:19.615764] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:36:27.963 [2024-10-01 17:01:19.615862] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:28.263 [2024-10-01 17:01:19.708285] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:28.263 [2024-10-01 17:01:19.798796] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:28.263 [2024-10-01 17:01:19.798853] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:28.263 [2024-10-01 17:01:19.798861] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:28.263 [2024-10-01 17:01:19.798868] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:28.263 [2024-10-01 17:01:19.798874] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:28.263 [2024-10-01 17:01:19.798909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:28.838 17:01:20 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:28.838 17:01:20 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:36:28.838 17:01:20 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:28.838 17:01:20 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:28.838 17:01:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:29.100 17:01:20 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:29.100 17:01:20 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:29.100 17:01:20 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:29.100 17:01:20 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.100 17:01:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:29.100 [2024-10-01 17:01:20.559975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:29.100 17:01:20 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.100 17:01:20 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:29.100 17:01:20 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:29.100 17:01:20 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:29.100 17:01:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:29.100 ************************************ 00:36:29.100 START TEST fio_dif_1_default 00:36:29.100 ************************************ 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:29.100 bdev_null0 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:29.100 [2024-10-01 17:01:20.648385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:29.100 { 00:36:29.100 "params": { 00:36:29.100 "name": "Nvme$subsystem", 00:36:29.100 "trtype": "$TEST_TRANSPORT", 00:36:29.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:29.100 "adrfam": "ipv4", 00:36:29.100 "trsvcid": "$NVMF_PORT", 00:36:29.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:36:29.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:29.100 "hdgst": ${hdgst:-false}, 00:36:29.100 "ddgst": ${ddgst:-false} 00:36:29.100 }, 00:36:29.100 "method": "bdev_nvme_attach_controller" 00:36:29.100 } 00:36:29.100 EOF 00:36:29.100 )") 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:29.100 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:29.101 "params": { 00:36:29.101 "name": "Nvme0", 00:36:29.101 "trtype": "tcp", 00:36:29.101 "traddr": "10.0.0.2", 00:36:29.101 "adrfam": "ipv4", 00:36:29.101 "trsvcid": "4420", 00:36:29.101 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:29.101 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:29.101 "hdgst": false, 00:36:29.101 "ddgst": false 00:36:29.101 }, 00:36:29.101 "method": "bdev_nvme_attach_controller" 00:36:29.101 }' 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:29.101 17:01:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:29.362 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:29.362 fio-3.35 00:36:29.362 Starting 1 thread 00:36:41.594 00:36:41.594 filename0: (groupid=0, jobs=1): err= 0: pid=2970272: Tue Oct 1 17:01:31 2024 00:36:41.594 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10009msec) 00:36:41.594 slat (nsec): min=7284, max=59042, avg=7580.40, stdev=2068.03 00:36:41.594 clat (usec): min=40862, max=44680, avg=40998.42, stdev=243.03 00:36:41.594 lat (usec): min=40870, max=44723, avg=41006.00, stdev=243.99 00:36:41.594 clat percentiles (usec): 00:36:41.594 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:41.594 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:41.594 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:41.594 | 99.00th=[41157], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:36:41.594 | 99.99th=[44827] 00:36:41.594 bw ( KiB/s): min= 384, max= 416, per=99.47%, avg=388.80, stdev=11.72, samples=20 00:36:41.594 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:36:41.594 lat (msec) : 50=100.00% 00:36:41.594 cpu : usr=93.61%, sys=6.15%, ctx=10, majf=0, minf=228 00:36:41.594 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:41.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.594 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:41.594 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:41.594 00:36:41.594 Run status group 0 (all jobs): 
00:36:41.594 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10009-10009msec 00:36:41.594 17:01:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:41.594 17:01:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:41.594 17:01:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:41.594 17:01:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:41.594 17:01:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:41.594 17:01:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:41.594 17:01:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.594 17:01:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.595 00:36:41.595 real 0m11.076s 00:36:41.595 user 0m15.888s 00:36:41.595 sys 0m0.958s 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:41.595 ************************************ 00:36:41.595 END TEST fio_dif_1_default 00:36:41.595 ************************************ 00:36:41.595 17:01:31 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:41.595 17:01:31 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:41.595 17:01:31 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:41.595 17:01:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:41.595 ************************************ 00:36:41.595 START TEST fio_dif_1_multi_subsystems 00:36:41.595 ************************************ 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:41.595 bdev_null0 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:41.595 [2024-10-01 17:01:31.801872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:41.595 bdev_null1 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:41.595 { 00:36:41.595 "params": { 00:36:41.595 "name": "Nvme$subsystem", 00:36:41.595 "trtype": "$TEST_TRANSPORT", 00:36:41.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:41.595 "adrfam": "ipv4", 00:36:41.595 "trsvcid": "$NVMF_PORT", 00:36:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:41.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:41.595 "hdgst": ${hdgst:-false}, 00:36:41.595 "ddgst": ${ddgst:-false} 00:36:41.595 }, 00:36:41.595 "method": "bdev_nvme_attach_controller" 00:36:41.595 } 00:36:41.595 EOF 00:36:41.595 )") 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file = 1 )) 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:41.595 { 00:36:41.595 "params": { 00:36:41.595 "name": "Nvme$subsystem", 00:36:41.595 "trtype": "$TEST_TRANSPORT", 00:36:41.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:41.595 "adrfam": "ipv4", 00:36:41.595 "trsvcid": "$NVMF_PORT", 00:36:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:41.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:41.595 "hdgst": ${hdgst:-false}, 00:36:41.595 "ddgst": ${ddgst:-false} 00:36:41.595 }, 00:36:41.595 "method": "bdev_nvme_attach_controller" 00:36:41.595 } 00:36:41.595 EOF 00:36:41.595 )") 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:36:41.595 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:41.595 "params": { 00:36:41.595 "name": "Nvme0", 00:36:41.595 "trtype": "tcp", 00:36:41.595 "traddr": "10.0.0.2", 00:36:41.595 "adrfam": "ipv4", 00:36:41.595 "trsvcid": "4420", 00:36:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:41.595 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:41.595 "hdgst": false, 00:36:41.595 "ddgst": false 00:36:41.595 }, 00:36:41.595 "method": "bdev_nvme_attach_controller" 00:36:41.595 },{ 00:36:41.595 "params": { 00:36:41.595 "name": "Nvme1", 00:36:41.595 "trtype": "tcp", 00:36:41.595 "traddr": "10.0.0.2", 00:36:41.595 "adrfam": "ipv4", 00:36:41.596 "trsvcid": "4420", 00:36:41.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:41.596 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:41.596 "hdgst": false, 00:36:41.596 "ddgst": false 00:36:41.596 }, 00:36:41.596 "method": "bdev_nvme_attach_controller" 00:36:41.596 }' 00:36:41.596 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:41.596 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:41.596 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:41.596 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:41.596 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:41.596 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:41.596 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:41.596 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:41.596 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:41.596 17:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:41.596 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:41.596 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:41.596 fio-3.35 00:36:41.596 Starting 2 threads 00:36:51.591 00:36:51.591 filename0: (groupid=0, jobs=1): err= 0: pid=2972281: Tue Oct 1 17:01:43 2024 00:36:51.591 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10007msec) 00:36:51.591 slat (nsec): min=7266, max=41754, avg=7691.10, stdev=1588.38 00:36:51.591 clat (usec): min=467, max=42007, avg=40822.07, stdev=2585.52 00:36:51.591 lat (usec): min=474, max=42015, avg=40829.76, stdev=2585.52 00:36:51.591 clat percentiles (usec): 00:36:51.591 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:51.591 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:51.591 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:51.591 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:36:51.591 | 99.99th=[42206] 00:36:51.591 bw ( KiB/s): min= 384, max= 416, per=49.89%, avg=390.40, stdev=13.13, samples=20 00:36:51.591 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:36:51.591 lat (usec) : 500=0.41% 00:36:51.591 lat (msec) : 50=99.59% 00:36:51.591 cpu : usr=95.54%, sys=4.24%, ctx=10, majf=0, minf=212 00:36:51.591 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.591 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.591 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:51.591 filename1: (groupid=0, jobs=1): err= 0: pid=2972282: Tue Oct 1 17:01:43 2024 00:36:51.591 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:36:51.591 slat (nsec): min=7258, max=35050, avg=7582.68, stdev=977.34 00:36:51.591 clat (usec): min=40830, max=42473, avg=40993.67, stdev=135.97 00:36:51.591 lat (usec): min=40838, max=42508, avg=41001.26, stdev=136.42 00:36:51.591 clat percentiles (usec): 00:36:51.591 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:51.591 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:51.591 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:51.591 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:36:51.591 | 99.99th=[42730] 00:36:51.591 bw ( KiB/s): min= 384, max= 416, per=49.63%, avg=388.80, stdev=11.72, samples=20 00:36:51.591 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:36:51.591 lat (msec) : 50=100.00% 00:36:51.591 cpu : usr=95.86%, sys=3.92%, ctx=17, majf=0, minf=71 00:36:51.591 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.591 issued rwts: 
total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.591 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:51.591 00:36:51.591 Run status group 0 (all jobs): 00:36:51.591 READ: bw=782KiB/s (801kB/s), 390KiB/s-392KiB/s (399kB/s-401kB/s), io=7824KiB (8012kB), run=10007-10008msec 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.591 00:36:51.591 real 0m11.435s 00:36:51.591 user 0m30.754s 00:36:51.591 sys 0m1.145s 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:51.591 17:01:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:51.591 ************************************ 00:36:51.591 END TEST fio_dif_1_multi_subsystems 00:36:51.591 ************************************ 00:36:51.591 17:01:43 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:51.591 17:01:43 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
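[Note: the multi_subsystems pass above created and tore down its targets through a small, repeated set of RPCs. Condensed into a stand-alone sequence, with rpc.py (SPDK's scripts/rpc.py) standing in for the trace's rpc_cmd wrapper; a running nvmf_tgt is assumed.]

# Lifecycle of one test pass, using the same RPC names and flags the trace
# shows. 'sub' is the subsystem index (0, 1, ...).
sub=1
rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip   # once per target
rpc.py bdev_null_create bdev_null$sub 64 512 --md-size 16 --dif-type 1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub \
    --serial-number 53313233-$sub --allow-any-host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$sub \
    -t tcp -a 10.0.0.2 -s 4420
# ... fio runs against the listener here ...
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$sub
rpc.py bdev_null_delete bdev_null$sub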
00:36:51.591 17:01:43 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:51.591 17:01:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:51.591 ************************************ 00:36:51.591 START TEST fio_dif_rand_params 00:36:51.591 ************************************ 00:36:51.591 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:36:51.591 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:51.591 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:51.591 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:51.591 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:51.591 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:51.591 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:51.851 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.852 bdev_null0 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.852 [2024-10-01 17:01:43.316730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.852 17:01:43 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:51.852 { 00:36:51.852 "params": { 00:36:51.852 "name": "Nvme$subsystem", 00:36:51.852 "trtype": "$TEST_TRANSPORT", 00:36:51.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:51.852 "adrfam": "ipv4", 00:36:51.852 "trsvcid": "$NVMF_PORT", 00:36:51.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:51.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:51.852 "hdgst": ${hdgst:-false}, 00:36:51.852 "ddgst": ${ddgst:-false} 00:36:51.852 }, 00:36:51.852 "method": "bdev_nvme_attach_controller" 00:36:51.852 } 00:36:51.852 EOF 00:36:51.852 )") 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
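The fio_plugin trace above condenses to the logic sketched below (a simplified reading of the autotest_common.sh helper, not a verbatim copy): if build/fio/spdk_bdev links a sanitizer runtime, that runtime must be preloaded ahead of the plugin or ASan aborts at startup. In this run both lookups came back empty, so only the plugin itself lands in LD_PRELOAD.

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
sanitizers=(libasan libclang_rt.asan)
asan_lib=
for sanitizer in "${sanitizers[@]}"; do
    # Third ldd column is the resolved library path; empty if not linked in.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61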
00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:51.852 "params": { 00:36:51.852 "name": "Nvme0", 00:36:51.852 "trtype": "tcp", 00:36:51.852 "traddr": "10.0.0.2", 00:36:51.852 "adrfam": "ipv4", 00:36:51.852 "trsvcid": "4420", 00:36:51.852 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:51.852 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:51.852 "hdgst": false, 00:36:51.852 "ddgst": false 00:36:51.852 }, 00:36:51.852 "method": "bdev_nvme_attach_controller" 00:36:51.852 }' 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:51.852 17:01:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:52.112 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:52.112 ... 
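The JSON printed above is the complete attach description the plugin needs, so the run can be replayed outside the harness. A sketch under stated assumptions: the file names bdev.json and job.fio are hypothetical; the printed stanza is wrapped in the standard {"subsystems": [...]} envelope the plugin expects (gen_nvmf_target_json adds the same wrapper around what is shown here); Nvme0n1 is the namespace bdev that bdev_nvme_attach_controller creates for cnode0; the job parameters mirror this NULL_DIF=3 pass (rw=randread, bs=128k, numjobs=3, iodepth=3, runtime=5).

cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF

cat > job.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=bdev.json
thread=1
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
runtime=5
time_based=1
[filename0]
numjobs=3
EOF

LD_PRELOAD=./spdk/build/fio/spdk_bdev /usr/src/fio/fio job.fio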
00:36:52.112 fio-3.35 00:36:52.112 Starting 3 threads 00:36:58.776 00:36:58.776 filename0: (groupid=0, jobs=1): err= 0: pid=2974250: Tue Oct 1 17:01:49 2024 00:36:58.776 read: IOPS=245, BW=30.7MiB/s (32.2MB/s)(154MiB/5005msec) 00:36:58.776 slat (nsec): min=7349, max=45241, avg=9681.99, stdev=2096.49 00:36:58.776 clat (usec): min=4755, max=52495, avg=12201.56, stdev=5706.61 00:36:58.776 lat (usec): min=4763, max=52503, avg=12211.24, stdev=5706.88 00:36:58.776 clat percentiles (usec): 00:36:58.776 | 1.00th=[ 6718], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[10159], 00:36:58.776 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11731], 60.00th=[11994], 00:36:58.776 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13042], 95.00th=[13566], 00:36:58.776 | 99.00th=[46924], 99.50th=[50594], 99.90th=[52167], 99.95th=[52691], 00:36:58.776 | 99.99th=[52691] 00:36:58.776 bw ( KiB/s): min=15360, max=35584, per=31.91%, avg=31411.20, stdev=5802.73, samples=10 00:36:58.776 iops : min= 120, max= 278, avg=245.40, stdev=45.33, samples=10 00:36:58.776 lat (msec) : 10=18.39%, 20=79.41%, 50=1.22%, 100=0.98% 00:36:58.776 cpu : usr=95.18%, sys=4.44%, ctx=77, majf=0, minf=113 00:36:58.776 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:58.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.776 issued rwts: total=1229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.776 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:58.776 filename0: (groupid=0, jobs=1): err= 0: pid=2974251: Tue Oct 1 17:01:49 2024 00:36:58.776 read: IOPS=235, BW=29.4MiB/s (30.9MB/s)(149MiB/5045msec) 00:36:58.776 slat (nsec): min=7359, max=49529, avg=8835.30, stdev=2334.74 00:36:58.776 clat (usec): min=5905, max=53904, avg=12691.07, stdev=5449.43 00:36:58.776 lat (usec): min=5914, max=53913, avg=12699.91, stdev=5449.66 00:36:58.776 clat percentiles (usec): 00:36:58.776 | 1.00th=[ 7832], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10814], 00:36:58.776 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:36:58.776 | 70.00th=[12649], 80.00th=[13173], 90.00th=[13698], 95.00th=[14222], 00:36:58.776 | 99.00th=[47973], 99.50th=[48497], 99.90th=[52167], 99.95th=[53740], 00:36:58.776 | 99.99th=[53740] 00:36:58.776 bw ( KiB/s): min=17408, max=34304, per=30.83%, avg=30348.10, stdev=4702.74, samples=10 00:36:58.776 iops : min= 136, max= 268, avg=237.00, stdev=36.74, samples=10 00:36:58.776 lat (msec) : 10=10.86%, 20=86.95%, 50=1.94%, 100=0.25% 00:36:58.776 cpu : usr=95.22%, sys=4.50%, ctx=10, majf=0, minf=101 00:36:58.776 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:58.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.776 issued rwts: total=1188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.776 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:58.776 filename0: (groupid=0, jobs=1): err= 0: pid=2974252: Tue Oct 1 17:01:49 2024 00:36:58.776 read: IOPS=290, BW=36.3MiB/s (38.0MB/s)(183MiB/5044msec) 00:36:58.776 slat (nsec): min=7310, max=49068, avg=8789.19, stdev=1949.30 00:36:58.776 clat (usec): min=4742, max=49529, avg=10302.83, stdev=4392.01 00:36:58.776 lat (usec): min=4749, max=49537, avg=10311.62, stdev=4392.03 00:36:58.776 clat percentiles (usec): 00:36:58.776 | 1.00th=[ 5538], 5.00th=[ 7570], 10.00th=[ 8356], 20.00th=[ 
9110], 00:36:58.776 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:36:58.776 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11469], 95.00th=[11994], 00:36:58.776 | 99.00th=[45351], 99.50th=[45876], 99.90th=[47449], 99.95th=[49546], 00:36:58.776 | 99.99th=[49546] 00:36:58.776 bw ( KiB/s): min=27136, max=40529, per=38.01%, avg=37417.40, stdev=3813.87, samples=10 00:36:58.776 iops : min= 212, max= 316, avg=292.20, stdev=29.72, samples=10 00:36:58.776 lat (msec) : 10=54.07%, 20=44.57%, 50=1.37% 00:36:58.776 cpu : usr=94.71%, sys=5.04%, ctx=9, majf=0, minf=116 00:36:58.776 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:58.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.776 issued rwts: total=1463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.776 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:58.776 00:36:58.776 Run status group 0 (all jobs): 00:36:58.777 READ: bw=96.1MiB/s (101MB/s), 29.4MiB/s-36.3MiB/s (30.9MB/s-38.0MB/s), io=485MiB (509MB), run=5005-5045msec 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.777 bdev_null0 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.777 [2024-10-01 17:01:49.484704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.777 bdev_null1 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.777 bdev_null2 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:58.777 17:01:49 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:58.777 17:01:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:58.777 { 00:36:58.777 "params": { 00:36:58.777 "name": "Nvme$subsystem", 00:36:58.777 "trtype": "$TEST_TRANSPORT", 00:36:58.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:58.778 "adrfam": "ipv4", 00:36:58.778 "trsvcid": "$NVMF_PORT", 00:36:58.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:58.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:58.778 "hdgst": ${hdgst:-false}, 00:36:58.778 "ddgst": ${ddgst:-false} 00:36:58.778 }, 00:36:58.778 "method": "bdev_nvme_attach_controller" 00:36:58.778 } 00:36:58.778 EOF 00:36:58.778 )") 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:58.778 { 00:36:58.778 "params": { 00:36:58.778 "name": "Nvme$subsystem", 00:36:58.778 "trtype": "$TEST_TRANSPORT", 00:36:58.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:58.778 "adrfam": "ipv4", 00:36:58.778 "trsvcid": "$NVMF_PORT", 00:36:58.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:58.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:58.778 "hdgst": ${hdgst:-false}, 00:36:58.778 "ddgst": ${ddgst:-false} 00:36:58.778 }, 00:36:58.778 "method": "bdev_nvme_attach_controller" 00:36:58.778 } 00:36:58.778 EOF 00:36:58.778 )") 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:58.778 17:01:49 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:58.778 { 00:36:58.778 "params": { 00:36:58.778 "name": "Nvme$subsystem", 00:36:58.778 "trtype": "$TEST_TRANSPORT", 00:36:58.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:58.778 "adrfam": "ipv4", 00:36:58.778 "trsvcid": "$NVMF_PORT", 00:36:58.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:58.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:58.778 "hdgst": ${hdgst:-false}, 00:36:58.778 "ddgst": ${ddgst:-false} 00:36:58.778 }, 00:36:58.778 "method": "bdev_nvme_attach_controller" 00:36:58.778 } 00:36:58.778 EOF 00:36:58.778 )") 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:58.778 "params": { 00:36:58.778 "name": "Nvme0", 00:36:58.778 "trtype": "tcp", 00:36:58.778 "traddr": "10.0.0.2", 00:36:58.778 "adrfam": "ipv4", 00:36:58.778 "trsvcid": "4420", 00:36:58.778 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:58.778 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:58.778 "hdgst": false, 00:36:58.778 "ddgst": false 00:36:58.778 }, 00:36:58.778 "method": "bdev_nvme_attach_controller" 00:36:58.778 },{ 00:36:58.778 "params": { 00:36:58.778 "name": "Nvme1", 00:36:58.778 "trtype": "tcp", 00:36:58.778 "traddr": "10.0.0.2", 00:36:58.778 "adrfam": "ipv4", 00:36:58.778 "trsvcid": "4420", 00:36:58.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:58.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:58.778 "hdgst": false, 00:36:58.778 "ddgst": false 00:36:58.778 }, 00:36:58.778 "method": "bdev_nvme_attach_controller" 00:36:58.778 },{ 00:36:58.778 "params": { 00:36:58.778 "name": "Nvme2", 00:36:58.778 "trtype": "tcp", 00:36:58.778 "traddr": "10.0.0.2", 00:36:58.778 "adrfam": "ipv4", 00:36:58.778 "trsvcid": "4420", 00:36:58.778 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:58.778 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:58.778 "hdgst": false, 00:36:58.778 "ddgst": false 00:36:58.778 }, 00:36:58.778 "method": "bdev_nvme_attach_controller" 00:36:58.778 }' 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:58.778 
17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:58.778 17:01:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:58.778 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:58.778 ... 00:36:58.778 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:58.778 ... 00:36:58.778 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:58.778 ... 00:36:58.778 fio-3.35 00:36:58.778 Starting 24 threads 00:37:11.013 00:37:11.013 filename0: (groupid=0, jobs=1): err= 0: pid=2975619: Tue Oct 1 17:02:01 2024 00:37:11.013 read: IOPS=541, BW=2166KiB/s (2218kB/s)(21.2MiB/10027msec) 00:37:11.013 slat (nsec): min=7273, max=69392, avg=9173.37, stdev=4225.86 00:37:11.013 clat (usec): min=2128, max=32026, avg=29466.19, stdev=3272.51 00:37:11.013 lat (usec): min=2145, max=32035, avg=29475.36, stdev=3271.01 00:37:11.013 clat percentiles (usec): 00:37:11.013 | 1.00th=[ 5211], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:37:11.013 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:37:11.013 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:37:11.013 | 99.00th=[31065], 99.50th=[31327], 99.90th=[32113], 99.95th=[32113], 00:37:11.013 | 99.99th=[32113] 00:37:11.013 bw ( KiB/s): min= 2048, max= 2736, per=4.22%, avg=2165.60, stdev=146.85, samples=20 00:37:11.013 iops : min= 512, max= 684, avg=541.40, stdev=36.71, samples=20 00:37:11.013 lat (msec) : 4=0.59%, 10=0.72%, 20=0.68%, 50=98.01% 00:37:11.013 cpu : usr=98.96%, sys=0.71%, ctx=24, majf=0, minf=51 00:37:11.013 IO depths : 1=6.1%, 2=12.3%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:11.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.013 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.013 issued rwts: total=5430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.013 filename0: (groupid=0, jobs=1): err= 0: pid=2975620: Tue Oct 1 17:02:01 2024 00:37:11.013 read: IOPS=534, BW=2139KiB/s (2191kB/s)(20.9MiB/10003msec) 00:37:11.013 slat (usec): min=5, max=150, avg=37.65, stdev=22.61 00:37:11.013 clat (usec): min=10454, max=46905, avg=29560.21, stdev=1845.60 00:37:11.013 lat (usec): min=10465, max=46923, avg=29597.86, stdev=1847.61 00:37:11.013 clat percentiles (usec): 00:37:11.013 | 1.00th=[21103], 5.00th=[28967], 10.00th=[29230], 20.00th=[29230], 00:37:11.013 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:37:11.013 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:11.013 | 99.00th=[31327], 99.50th=[41157], 99.90th=[45351], 99.95th=[46924], 00:37:11.013 | 99.99th=[46924] 00:37:11.013 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2137.58, stdev=58.27, samples=19 00:37:11.013 iops : min= 512, max= 544, avg=534.32, stdev=14.58, samples=19 00:37:11.013 lat (msec) : 20=0.60%, 50=99.40% 00:37:11.013 cpu : usr=98.90%, sys=0.65%, ctx=116, majf=0, minf=24 00:37:11.013 IO depths : 1=5.8%, 2=11.9%, 4=24.6%, 8=50.9%, 
16=6.7%, 32=0.0%, >=64=0.0% 00:37:11.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.013 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.013 issued rwts: total=5350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.013 filename0: (groupid=0, jobs=1): err= 0: pid=2975621: Tue Oct 1 17:02:01 2024 00:37:11.013 read: IOPS=533, BW=2134KiB/s (2186kB/s)(20.9MiB/10015msec) 00:37:11.013 slat (usec): min=7, max=105, avg=30.44, stdev=17.63 00:37:11.013 clat (usec): min=19163, max=31880, avg=29735.83, stdev=741.23 00:37:11.013 lat (usec): min=19188, max=31891, avg=29766.28, stdev=740.21 00:37:11.013 clat percentiles (usec): 00:37:11.013 | 1.00th=[28181], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:37:11.013 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:37:11.013 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:11.013 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:37:11.013 | 99.99th=[31851] 00:37:11.013 bw ( KiB/s): min= 2043, max= 2176, per=4.16%, avg=2135.32, stdev=61.54, samples=19 00:37:11.013 iops : min= 510, max= 544, avg=533.79, stdev=15.45, samples=19 00:37:11.013 lat (msec) : 20=0.30%, 50=99.70% 00:37:11.013 cpu : usr=98.90%, sys=0.75%, ctx=12, majf=0, minf=25 00:37:11.013 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.013 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.013 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.013 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.013 filename0: (groupid=0, jobs=1): err= 0: pid=2975622: Tue Oct 1 17:02:01 2024 00:37:11.013 read: IOPS=532, BW=2132KiB/s (2183kB/s)(20.8MiB/10005msec) 00:37:11.013 slat (usec): min=7, max=177, avg=25.62, stdev=19.55 00:37:11.013 clat (usec): min=16421, max=49241, avg=29774.76, stdev=2460.47 00:37:11.013 lat (usec): min=16429, max=49248, avg=29800.38, stdev=2460.95 00:37:11.013 clat percentiles (usec): 00:37:11.013 | 1.00th=[21890], 5.00th=[28443], 10.00th=[29230], 20.00th=[29492], 00:37:11.013 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:37:11.013 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30802], 00:37:11.013 | 99.00th=[41681], 99.50th=[42730], 99.90th=[47973], 99.95th=[47973], 00:37:11.013 | 99.99th=[49021] 00:37:11.013 bw ( KiB/s): min= 2043, max= 2368, per=4.15%, avg=2130.26, stdev=87.62, samples=19 00:37:11.013 iops : min= 510, max= 592, avg=532.53, stdev=21.95, samples=19 00:37:11.014 lat (msec) : 20=0.77%, 50=99.23% 00:37:11.014 cpu : usr=98.86%, sys=0.72%, ctx=37, majf=0, minf=33 00:37:11.014 IO depths : 1=5.2%, 2=10.7%, 4=22.4%, 8=54.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:37:11.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.014 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.014 issued rwts: total=5332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.014 filename0: (groupid=0, jobs=1): err= 0: pid=2975623: Tue Oct 1 17:02:01 2024 00:37:11.014 read: IOPS=533, BW=2133KiB/s (2184kB/s)(20.9MiB/10023msec) 00:37:11.014 slat (nsec): min=7278, max=98486, avg=19093.07, stdev=14793.03 00:37:11.014 clat (usec): min=22442, 
max=37035, avg=29838.43, stdev=786.98 00:37:11.014 lat (usec): min=22450, max=37056, avg=29857.53, stdev=785.67 00:37:11.014 clat percentiles (usec): 00:37:11.014 | 1.00th=[28181], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:11.014 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:11.014 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:37:11.014 | 99.00th=[31065], 99.50th=[31589], 99.90th=[36963], 99.95th=[36963], 00:37:11.014 | 99.99th=[36963] 00:37:11.014 bw ( KiB/s): min= 2048, max= 2176, per=4.14%, avg=2128.32, stdev=63.04, samples=19 00:37:11.014 iops : min= 512, max= 544, avg=532.00, stdev=15.71, samples=19 00:37:11.014 lat (msec) : 50=100.00% 00:37:11.014 cpu : usr=99.08%, sys=0.62%, ctx=12, majf=0, minf=28 00:37:11.014 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.014 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.014 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.014 filename0: (groupid=0, jobs=1): err= 0: pid=2975624: Tue Oct 1 17:02:01 2024 00:37:11.014 read: IOPS=539, BW=2158KiB/s (2210kB/s)(21.1MiB/10004msec) 00:37:11.014 slat (nsec): min=7315, max=90414, avg=20934.69, stdev=14498.12 00:37:11.014 clat (usec): min=6298, max=36978, avg=29453.08, stdev=2402.83 00:37:11.014 lat (usec): min=6306, max=36987, avg=29474.02, stdev=2404.57 00:37:11.014 clat percentiles (usec): 00:37:11.014 | 1.00th=[16909], 5.00th=[28967], 10.00th=[29492], 20.00th=[29492], 00:37:11.014 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:37:11.014 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:11.014 | 99.00th=[31065], 99.50th=[31327], 99.90th=[36963], 99.95th=[36963], 00:37:11.014 | 99.99th=[36963] 00:37:11.014 bw ( KiB/s): min= 2048, max= 2608, per=4.20%, avg=2158.32, stdev=124.50, samples=19 00:37:11.014 iops : min= 512, max= 652, avg=539.58, stdev=31.12, samples=19 00:37:11.014 lat (msec) : 10=0.41%, 20=0.85%, 50=98.74% 00:37:11.014 cpu : usr=98.98%, sys=0.68%, ctx=15, majf=0, minf=38 00:37:11.014 IO depths : 1=5.9%, 2=11.8%, 4=23.9%, 8=51.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:11.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.014 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.014 issued rwts: total=5398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.014 filename0: (groupid=0, jobs=1): err= 0: pid=2975625: Tue Oct 1 17:02:01 2024 00:37:11.014 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10014msec) 00:37:11.014 slat (nsec): min=5622, max=84264, avg=14286.38, stdev=12902.11 00:37:11.014 clat (usec): min=12494, max=37916, avg=29867.72, stdev=1058.33 00:37:11.014 lat (usec): min=12511, max=37947, avg=29882.01, stdev=1057.81 00:37:11.014 clat percentiles (usec): 00:37:11.014 | 1.00th=[27395], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:37:11.014 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:37:11.014 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:37:11.014 | 99.00th=[31065], 99.50th=[31327], 99.90th=[34341], 99.95th=[36439], 00:37:11.014 | 99.99th=[38011] 00:37:11.014 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2135.58, stdev=61.13, 
samples=19 00:37:11.014 iops : min= 512, max= 544, avg=533.89, stdev=15.28, samples=19 00:37:11.014 lat (msec) : 20=0.34%, 50=99.66% 00:37:11.014 cpu : usr=99.20%, sys=0.54%, ctx=13, majf=0, minf=45 00:37:11.014 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:11.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.014 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.014 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.014 filename0: (groupid=0, jobs=1): err= 0: pid=2975626: Tue Oct 1 17:02:01 2024 00:37:11.014 read: IOPS=547, BW=2189KiB/s (2241kB/s)(21.4MiB/10005msec) 00:37:11.014 slat (usec): min=6, max=125, avg=18.29, stdev=18.50 00:37:11.014 clat (usec): min=4394, max=61776, avg=29161.78, stdev=4285.63 00:37:11.014 lat (usec): min=4402, max=61794, avg=29180.07, stdev=4283.93 00:37:11.014 clat percentiles (usec): 00:37:11.014 | 1.00th=[18220], 5.00th=[21890], 10.00th=[23725], 20.00th=[26346], 00:37:11.014 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:11.014 | 70.00th=[30016], 80.00th=[30540], 90.00th=[33817], 95.00th=[35914], 00:37:11.014 | 99.00th=[38536], 99.50th=[41681], 99.90th=[53216], 99.95th=[61604], 00:37:11.014 | 99.99th=[61604] 00:37:11.014 bw ( KiB/s): min= 1920, max= 2336, per=4.24%, avg=2179.95, stdev=78.62, samples=19 00:37:11.014 iops : min= 480, max= 584, avg=544.95, stdev=19.67, samples=19 00:37:11.014 lat (msec) : 10=0.29%, 20=2.94%, 50=96.47%, 100=0.29% 00:37:11.014 cpu : usr=98.98%, sys=0.69%, ctx=17, majf=0, minf=23 00:37:11.014 IO depths : 1=0.6%, 2=1.2%, 4=4.4%, 8=78.5%, 16=15.4%, 32=0.0%, >=64=0.0% 00:37:11.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.014 complete : 0=0.0%, 4=89.3%, 8=8.5%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.014 issued rwts: total=5474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.014 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.014 filename1: (groupid=0, jobs=1): err= 0: pid=2975627: Tue Oct 1 17:02:01 2024 00:37:11.014 read: IOPS=538, BW=2154KiB/s (2205kB/s)(21.1MiB/10018msec) 00:37:11.014 slat (usec): min=7, max=111, avg=29.38, stdev=21.31 00:37:11.015 clat (usec): min=17929, max=45444, avg=29490.86, stdev=2174.74 00:37:11.015 lat (usec): min=17937, max=45453, avg=29520.24, stdev=2176.67 00:37:11.015 clat percentiles (usec): 00:37:11.015 | 1.00th=[19530], 5.00th=[25822], 10.00th=[29230], 20.00th=[29492], 00:37:11.015 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:37:11.015 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:11.015 | 99.00th=[34341], 99.50th=[39060], 99.90th=[45351], 99.95th=[45351], 00:37:11.015 | 99.99th=[45351] 00:37:11.015 bw ( KiB/s): min= 2043, max= 2352, per=4.20%, avg=2156.37, stdev=80.31, samples=19 00:37:11.015 iops : min= 510, max= 588, avg=539.05, stdev=20.14, samples=19 00:37:11.015 lat (msec) : 20=1.22%, 50=98.78% 00:37:11.015 cpu : usr=99.06%, sys=0.60%, ctx=15, majf=0, minf=24 00:37:11.015 IO depths : 1=5.7%, 2=11.6%, 4=23.7%, 8=52.2%, 16=6.8%, 32=0.0%, >=64=0.0% 00:37:11.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.015 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.015 issued rwts: total=5394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.015 latency : target=0, window=0, percentile=100.00%, depth=16 
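A quick consistency check on the per-thread stats above: this NULL_DIF=2 pass reads 4k blocks, so bandwidth is just IOPS x 4 KiB, and each job's per= figure is its slice of the 24-thread aggregate. Worked through for pid=2975621, with the values copied from its bw/iops lines:

awk 'BEGIN {
    iops = 533.79   # avg iops reported for pid=2975621
    bs_kib = 4      # 4k block size in this run
    per = 0.0416    # per=4.16%
    bw = iops * bs_kib
    printf "bw = %.1f KiB/s (reported avg 2135.32)\n", bw
    printf "aggregate = %.0f KiB/s (~50 MiB/s across 24 threads)\n", bw / per
}'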
00:37:11.015 filename1: (groupid=0, jobs=1): err= 0: pid=2975628: Tue Oct 1 17:02:01 2024 00:37:11.015 read: IOPS=542, BW=2169KiB/s (2221kB/s)(21.2MiB/10010msec) 00:37:11.015 slat (usec): min=7, max=119, avg=35.28, stdev=22.64 00:37:11.015 clat (usec): min=13565, max=47458, avg=29190.21, stdev=2878.60 00:37:11.015 lat (usec): min=13574, max=47482, avg=29225.49, stdev=2883.61 00:37:11.015 clat percentiles (usec): 00:37:11.015 | 1.00th=[19268], 5.00th=[22414], 10.00th=[28181], 20.00th=[29230], 00:37:11.015 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:37:11.015 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:11.015 | 99.00th=[37487], 99.50th=[41157], 99.90th=[47449], 99.95th=[47449], 00:37:11.015 | 99.99th=[47449] 00:37:11.015 bw ( KiB/s): min= 2048, max= 2352, per=4.22%, avg=2170.95, stdev=82.98, samples=19 00:37:11.015 iops : min= 512, max= 588, avg=542.74, stdev=20.74, samples=19 00:37:11.015 lat (msec) : 20=2.67%, 50=97.33% 00:37:11.015 cpu : usr=99.14%, sys=0.53%, ctx=12, majf=0, minf=28 00:37:11.015 IO depths : 1=4.1%, 2=9.5%, 4=22.2%, 8=55.7%, 16=8.5%, 32=0.0%, >=64=0.0% 00:37:11.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.015 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.015 issued rwts: total=5428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.015 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.015 filename1: (groupid=0, jobs=1): err= 0: pid=2975629: Tue Oct 1 17:02:01 2024 00:37:11.015 read: IOPS=533, BW=2134KiB/s (2185kB/s)(20.8MiB/10004msec) 00:37:11.015 slat (nsec): min=5564, max=96573, avg=16975.85, stdev=14565.33 00:37:11.015 clat (usec): min=3957, max=61363, avg=29931.71, stdev=2237.05 00:37:11.015 lat (usec): min=3962, max=61384, avg=29948.69, stdev=2237.35 00:37:11.015 clat percentiles (usec): 00:37:11.015 | 1.00th=[26870], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:37:11.015 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:37:11.015 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:37:11.015 | 99.00th=[31327], 99.50th=[38011], 99.90th=[52691], 99.95th=[61080], 00:37:11.015 | 99.99th=[61604] 00:37:11.015 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2124.37, stdev=58.95, samples=19 00:37:11.015 iops : min= 480, max= 544, avg=531.05, stdev=14.76, samples=19 00:37:11.015 lat (msec) : 4=0.06%, 10=0.11%, 20=0.49%, 50=99.04%, 100=0.30% 00:37:11.015 cpu : usr=98.96%, sys=0.65%, ctx=56, majf=0, minf=29 00:37:11.015 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=81.1%, 16=18.6%, 32=0.0%, >=64=0.0% 00:37:11.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.015 complete : 0=0.0%, 4=89.5%, 8=10.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.015 issued rwts: total=5337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.015 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.015 filename1: (groupid=0, jobs=1): err= 0: pid=2975630: Tue Oct 1 17:02:01 2024 00:37:11.015 read: IOPS=531, BW=2124KiB/s (2175kB/s)(20.8MiB/10002msec) 00:37:11.015 slat (usec): min=7, max=132, avg=36.50, stdev=23.46 00:37:11.015 clat (usec): min=25364, max=50828, avg=29755.54, stdev=1336.37 00:37:11.015 lat (usec): min=25372, max=50860, avg=29792.04, stdev=1335.47 00:37:11.015 clat percentiles (usec): 00:37:11.015 | 1.00th=[28705], 5.00th=[29230], 10.00th=[29230], 20.00th=[29230], 00:37:11.015 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 
00:37:11.015 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:11.015 | 99.00th=[31327], 99.50th=[38536], 99.90th=[50594], 99.95th=[50594], 00:37:11.015 | 99.99th=[50594] 00:37:11.015 bw ( KiB/s): min= 1920, max= 2176, per=4.13%, avg=2121.84, stdev=77.51, samples=19 00:37:11.015 iops : min= 480, max= 544, avg=530.42, stdev=19.35, samples=19 00:37:11.015 lat (msec) : 50=99.70%, 100=0.30% 00:37:11.015 cpu : usr=98.98%, sys=0.67%, ctx=43, majf=0, minf=26 00:37:11.015 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.015 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.015 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.015 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.015 filename1: (groupid=0, jobs=1): err= 0: pid=2975631: Tue Oct 1 17:02:01 2024 00:37:11.015 read: IOPS=533, BW=2134KiB/s (2186kB/s)(20.9MiB/10015msec) 00:37:11.015 slat (usec): min=7, max=106, avg=16.73, stdev=15.71 00:37:11.015 clat (usec): min=19048, max=31784, avg=29850.48, stdev=772.58 00:37:11.015 lat (usec): min=19057, max=31794, avg=29867.22, stdev=769.17 00:37:11.015 clat percentiles (usec): 00:37:11.015 | 1.00th=[27919], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:11.015 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:11.015 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:37:11.015 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:37:11.015 | 99.99th=[31851] 00:37:11.016 bw ( KiB/s): min= 2043, max= 2176, per=4.16%, avg=2135.32, stdev=61.54, samples=19 00:37:11.016 iops : min= 510, max= 544, avg=533.79, stdev=15.45, samples=19 00:37:11.016 lat (msec) : 20=0.30%, 50=99.70% 00:37:11.016 cpu : usr=99.04%, sys=0.62%, ctx=14, majf=0, minf=39 00:37:11.016 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.016 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.016 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.016 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.016 filename1: (groupid=0, jobs=1): err= 0: pid=2975632: Tue Oct 1 17:02:01 2024 00:37:11.016 read: IOPS=535, BW=2143KiB/s (2195kB/s)(20.9MiB/10003msec) 00:37:11.016 slat (usec): min=7, max=100, avg=17.86, stdev=14.73 00:37:11.016 clat (usec): min=6495, max=37272, avg=29708.18, stdev=1874.20 00:37:11.016 lat (usec): min=6506, max=37350, avg=29726.04, stdev=1873.91 00:37:11.016 clat percentiles (usec): 00:37:11.016 | 1.00th=[21365], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:11.016 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:11.016 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:37:11.016 | 99.00th=[31327], 99.50th=[31589], 99.90th=[36963], 99.95th=[36963], 00:37:11.016 | 99.99th=[37487] 00:37:11.016 bw ( KiB/s): min= 2048, max= 2308, per=4.17%, avg=2142.53, stdev=72.43, samples=19 00:37:11.016 iops : min= 512, max= 577, avg=535.63, stdev=18.11, samples=19 00:37:11.016 lat (msec) : 10=0.30%, 20=0.67%, 50=99.03% 00:37:11.016 cpu : usr=99.05%, sys=0.61%, ctx=15, majf=0, minf=51 00:37:11.016 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:11.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.016 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.016 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.016 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.016 filename1: (groupid=0, jobs=1): err= 0: pid=2975633: Tue Oct 1 17:02:01 2024 00:37:11.016 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10004msec) 00:37:11.016 slat (nsec): min=6953, max=94895, avg=18674.94, stdev=13063.02 00:37:11.016 clat (usec): min=13036, max=53626, avg=29800.12, stdev=2503.13 00:37:11.016 lat (usec): min=13044, max=53645, avg=29818.80, stdev=2503.18 00:37:11.016 clat percentiles (usec): 00:37:11.016 | 1.00th=[16909], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:11.016 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:37:11.016 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:37:11.016 | 99.00th=[35390], 99.50th=[46400], 99.90th=[53740], 99.95th=[53740], 00:37:11.016 | 99.99th=[53740] 00:37:11.016 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2126.89, stdev=75.53, samples=19 00:37:11.016 iops : min= 480, max= 544, avg=531.68, stdev=18.86, samples=19 00:37:11.016 lat (msec) : 20=1.03%, 50=98.67%, 100=0.30% 00:37:11.016 cpu : usr=98.86%, sys=0.80%, ctx=16, majf=0, minf=25 00:37:11.016 IO depths : 1=6.0%, 2=12.0%, 4=24.2%, 8=51.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:11.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.016 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.016 issued rwts: total=5340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.016 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.016 filename1: (groupid=0, jobs=1): err= 0: pid=2975634: Tue Oct 1 17:02:01 2024 00:37:11.016 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10014msec) 00:37:11.016 slat (nsec): min=7306, max=95602, avg=19996.93, stdev=14760.27 00:37:11.016 clat (usec): min=14359, max=34753, avg=29822.98, stdev=1004.49 00:37:11.016 lat (usec): min=14371, max=34776, avg=29842.98, stdev=1004.01 00:37:11.016 clat percentiles (usec): 00:37:11.016 | 1.00th=[28443], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:11.016 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:11.016 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:37:11.016 | 99.00th=[31065], 99.50th=[31327], 99.90th=[32637], 99.95th=[34866], 00:37:11.016 | 99.99th=[34866] 00:37:11.016 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2135.58, stdev=61.13, samples=19 00:37:11.016 iops : min= 512, max= 544, avg=533.89, stdev=15.28, samples=19 00:37:11.016 lat (msec) : 20=0.39%, 50=99.61% 00:37:11.016 cpu : usr=99.15%, sys=0.52%, ctx=13, majf=0, minf=54 00:37:11.016 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:11.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.016 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.016 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.016 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.016 filename2: (groupid=0, jobs=1): err= 0: pid=2975635: Tue Oct 1 17:02:01 2024 00:37:11.016 read: IOPS=533, BW=2136KiB/s (2187kB/s)(20.9MiB/10009msec) 00:37:11.016 slat (nsec): min=7290, max=85534, avg=20799.27, stdev=14526.99 00:37:11.016 clat (usec): min=9985, max=41092, avg=29754.99, stdev=1543.13 
00:37:11.016 lat (usec): min=9994, max=41116, avg=29775.79, stdev=1544.08 00:37:11.016 clat percentiles (usec): 00:37:11.016 | 1.00th=[28443], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:11.016 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:37:11.016 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:11.016 | 99.00th=[31065], 99.50th=[31327], 99.90th=[41157], 99.95th=[41157], 00:37:11.016 | 99.99th=[41157] 00:37:11.016 bw ( KiB/s): min= 2043, max= 2176, per=4.14%, avg=2128.58, stdev=63.80, samples=19 00:37:11.016 iops : min= 510, max= 544, avg=532.11, stdev=16.01, samples=19 00:37:11.016 lat (msec) : 10=0.04%, 20=0.56%, 50=99.40% 00:37:11.016 cpu : usr=98.99%, sys=0.66%, ctx=36, majf=0, minf=35 00:37:11.016 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.016 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.016 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.016 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.016 filename2: (groupid=0, jobs=1): err= 0: pid=2975636: Tue Oct 1 17:02:01 2024 00:37:11.016 read: IOPS=538, BW=2153KiB/s (2205kB/s)(21.1MiB/10017msec) 00:37:11.016 slat (usec): min=7, max=124, avg=23.53, stdev=18.88 00:37:11.016 clat (usec): min=3780, max=48607, avg=29531.59, stdev=2751.74 00:37:11.016 lat (usec): min=3788, max=48634, avg=29555.13, stdev=2751.97 00:37:11.016 clat percentiles (usec): 00:37:11.016 | 1.00th=[17695], 5.00th=[28967], 10.00th=[29230], 20.00th=[29492], 00:37:11.016 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:11.016 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:37:11.016 | 99.00th=[31851], 99.50th=[35914], 99.90th=[48497], 99.95th=[48497], 00:37:11.016 | 99.99th=[48497] 00:37:11.016 bw ( KiB/s): min= 2048, max= 2560, per=4.19%, avg=2153.26, stdev=115.05, samples=19 00:37:11.016 iops : min= 512, max= 640, avg=538.32, stdev=28.76, samples=19 00:37:11.016 lat (msec) : 4=0.13%, 10=0.76%, 20=0.70%, 50=98.41% 00:37:11.016 cpu : usr=98.77%, sys=0.87%, ctx=29, majf=0, minf=31 00:37:11.016 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:11.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.017 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.017 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.017 filename2: (groupid=0, jobs=1): err= 0: pid=2975637: Tue Oct 1 17:02:01 2024 00:37:11.017 read: IOPS=533, BW=2134KiB/s (2185kB/s)(20.9MiB/10016msec) 00:37:11.017 slat (nsec): min=7276, max=74566, avg=16670.61, stdev=10833.57 00:37:11.017 clat (usec): min=22452, max=32063, avg=29844.67, stdev=666.99 00:37:11.017 lat (usec): min=22460, max=32072, avg=29861.34, stdev=666.63 00:37:11.017 clat percentiles (usec): 00:37:11.017 | 1.00th=[28443], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:37:11.017 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:11.017 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:37:11.017 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31851], 99.95th=[32113], 00:37:11.017 | 99.99th=[32113] 00:37:11.017 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2135.58, stdev=61.13, samples=19 00:37:11.017 iops 
: min= 512, max= 544, avg=533.89, stdev=15.28, samples=19 00:37:11.017 lat (msec) : 50=100.00% 00:37:11.017 cpu : usr=99.01%, sys=0.65%, ctx=13, majf=0, minf=26 00:37:11.017 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.017 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.017 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.017 filename2: (groupid=0, jobs=1): err= 0: pid=2975638: Tue Oct 1 17:02:01 2024 00:37:11.017 read: IOPS=533, BW=2135KiB/s (2187kB/s)(20.9MiB/10010msec) 00:37:11.017 slat (usec): min=7, max=117, avg=37.54, stdev=19.73 00:37:11.017 clat (usec): min=9592, max=58387, avg=29616.36, stdev=2104.46 00:37:11.017 lat (usec): min=9599, max=58409, avg=29653.90, stdev=2105.41 00:37:11.017 clat percentiles (usec): 00:37:11.017 | 1.00th=[25297], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:37:11.017 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:37:11.017 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:37:11.017 | 99.00th=[31327], 99.50th=[35914], 99.90th=[48497], 99.95th=[58459], 00:37:11.017 | 99.99th=[58459] 00:37:11.017 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2128.58, stdev=76.29, samples=19 00:37:11.017 iops : min= 480, max= 544, avg=532.11, stdev=19.05, samples=19 00:37:11.017 lat (msec) : 10=0.30%, 20=0.30%, 50=99.31%, 100=0.09% 00:37:11.017 cpu : usr=98.89%, sys=0.77%, ctx=15, majf=0, minf=23 00:37:11.017 IO depths : 1=6.2%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:11.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.017 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.017 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.017 filename2: (groupid=0, jobs=1): err= 0: pid=2975639: Tue Oct 1 17:02:01 2024 00:37:11.017 read: IOPS=533, BW=2133KiB/s (2184kB/s)(20.9MiB/10022msec) 00:37:11.017 slat (usec): min=5, max=126, avg=33.10, stdev=23.33 00:37:11.017 clat (usec): min=19031, max=35086, avg=29733.58, stdev=821.81 00:37:11.017 lat (usec): min=19047, max=35116, avg=29766.68, stdev=818.47 00:37:11.017 clat percentiles (usec): 00:37:11.017 | 1.00th=[27919], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:37:11.017 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:37:11.017 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:11.017 | 99.00th=[31065], 99.50th=[31851], 99.90th=[34866], 99.95th=[34866], 00:37:11.017 | 99.99th=[34866] 00:37:11.017 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2131.20, stdev=62.64, samples=20 00:37:11.017 iops : min= 512, max= 544, avg=532.80, stdev=15.66, samples=20 00:37:11.017 lat (msec) : 20=0.30%, 50=99.70% 00:37:11.017 cpu : usr=99.23%, sys=0.49%, ctx=16, majf=0, minf=43 00:37:11.017 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:11.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.017 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.017 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.017 filename2: (groupid=0, 
jobs=1): err= 0: pid=2975640: Tue Oct 1 17:02:01 2024 00:37:11.017 read: IOPS=533, BW=2133KiB/s (2184kB/s)(20.9MiB/10022msec) 00:37:11.017 slat (usec): min=7, max=109, avg=35.30, stdev=19.63 00:37:11.017 clat (usec): min=19291, max=39833, avg=29708.56, stdev=923.35 00:37:11.017 lat (usec): min=19331, max=39841, avg=29743.86, stdev=922.07 00:37:11.017 clat percentiles (usec): 00:37:11.017 | 1.00th=[27395], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:37:11.017 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:37:11.017 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:11.017 | 99.00th=[31065], 99.50th=[32637], 99.90th=[39584], 99.95th=[39584], 00:37:11.017 | 99.99th=[39584] 00:37:11.017 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2131.20, stdev=62.64, samples=20 00:37:11.017 iops : min= 512, max= 544, avg=532.80, stdev=15.66, samples=20 00:37:11.017 lat (msec) : 20=0.26%, 50=99.74% 00:37:11.017 cpu : usr=98.97%, sys=0.70%, ctx=15, majf=0, minf=26 00:37:11.017 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:11.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.017 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.017 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.017 filename2: (groupid=0, jobs=1): err= 0: pid=2975641: Tue Oct 1 17:02:01 2024 00:37:11.017 read: IOPS=542, BW=2169KiB/s (2221kB/s)(21.2MiB/10004msec) 00:37:11.017 slat (usec): min=7, max=123, avg=20.23, stdev=17.36 00:37:11.017 clat (usec): min=6334, max=68409, avg=29368.95, stdev=4161.04 00:37:11.017 lat (usec): min=6342, max=68431, avg=29389.17, stdev=4161.91 00:37:11.017 clat percentiles (usec): 00:37:11.017 | 1.00th=[15139], 5.00th=[22152], 10.00th=[25560], 20.00th=[29492], 00:37:11.017 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:37:11.017 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[34866], 00:37:11.017 | 99.00th=[43254], 99.50th=[46924], 99.90th=[52691], 99.95th=[68682], 00:37:11.017 | 99.99th=[68682] 00:37:11.017 bw ( KiB/s): min= 1984, max= 2400, per=4.21%, avg=2162.26, stdev=94.72, samples=19 00:37:11.017 iops : min= 496, max= 600, avg=540.53, stdev=23.69, samples=19 00:37:11.017 lat (msec) : 10=0.29%, 20=3.01%, 50=96.31%, 100=0.39% 00:37:11.017 cpu : usr=98.48%, sys=1.03%, ctx=92, majf=0, minf=29 00:37:11.017 IO depths : 1=2.4%, 2=5.1%, 4=12.3%, 8=67.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:37:11.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.017 complete : 0=0.0%, 4=91.2%, 8=5.3%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.017 issued rwts: total=5424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.017 filename2: (groupid=0, jobs=1): err= 0: pid=2975642: Tue Oct 1 17:02:01 2024 00:37:11.017 read: IOPS=533, BW=2136KiB/s (2187kB/s)(20.9MiB/10004msec) 00:37:11.017 slat (usec): min=7, max=144, avg=27.74, stdev=23.19 00:37:11.017 clat (usec): min=4954, max=72252, avg=29732.55, stdev=3161.88 00:37:11.017 lat (usec): min=4961, max=72274, avg=29760.29, stdev=3162.13 00:37:11.017 clat percentiles (usec): 00:37:11.017 | 1.00th=[21365], 5.00th=[28967], 10.00th=[29230], 20.00th=[29492], 00:37:11.017 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:11.017 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 
95.00th=[30540], 00:37:11.017 | 99.00th=[35390], 99.50th=[38011], 99.90th=[71828], 99.95th=[71828], 00:37:11.017 | 99.99th=[71828] 00:37:11.017 bw ( KiB/s): min= 1923, max= 2192, per=4.14%, avg=2125.37, stdev=72.43, samples=19 00:37:11.017 iops : min= 480, max= 548, avg=531.26, stdev=18.27, samples=19 00:37:11.017 lat (msec) : 10=0.30%, 20=0.67%, 50=98.73%, 100=0.30% 00:37:11.017 cpu : usr=98.84%, sys=0.79%, ctx=46, majf=0, minf=29 00:37:11.017 IO depths : 1=3.4%, 2=6.8%, 4=13.8%, 8=64.2%, 16=11.8%, 32=0.0%, >=64=0.0% 00:37:11.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.017 complete : 0=0.0%, 4=91.8%, 8=5.0%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.017 issued rwts: total=5342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:11.017 00:37:11.017 Run status group 0 (all jobs): 00:37:11.017 READ: bw=50.2MiB/s (52.6MB/s), 2124KiB/s-2189KiB/s (2175kB/s-2241kB/s), io=503MiB (527MB), run=10002-10027msec 00:37:11.017 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:11.017 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:11.017 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 
00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.018 bdev_null0 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 
-t tcp -a 10.0.0.2 -s 4420 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.018 [2024-10-01 17:02:01.427669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.018 bdev_null1 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem 
in "${@:-1}" 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:11.018 { 00:37:11.018 "params": { 00:37:11.018 "name": "Nvme$subsystem", 00:37:11.018 "trtype": "$TEST_TRANSPORT", 00:37:11.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:11.018 "adrfam": "ipv4", 00:37:11.018 "trsvcid": "$NVMF_PORT", 00:37:11.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:11.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:11.018 "hdgst": ${hdgst:-false}, 00:37:11.018 "ddgst": ${ddgst:-false} 00:37:11.018 }, 00:37:11.018 "method": "bdev_nvme_attach_controller" 00:37:11.018 } 00:37:11.018 EOF 00:37:11.018 )") 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:11.018 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:11.019 { 00:37:11.019 "params": { 00:37:11.019 "name": "Nvme$subsystem", 00:37:11.019 "trtype": "$TEST_TRANSPORT", 00:37:11.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:11.019 "adrfam": "ipv4", 00:37:11.019 "trsvcid": "$NVMF_PORT", 00:37:11.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:11.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:11.019 "hdgst": ${hdgst:-false}, 00:37:11.019 "ddgst": ${ddgst:-false} 00:37:11.019 }, 00:37:11.019 "method": "bdev_nvme_attach_controller" 00:37:11.019 } 00:37:11.019 EOF 00:37:11.019 )") 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:37:11.019 
17:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:11.019 "params": { 00:37:11.019 "name": "Nvme0", 00:37:11.019 "trtype": "tcp", 00:37:11.019 "traddr": "10.0.0.2", 00:37:11.019 "adrfam": "ipv4", 00:37:11.019 "trsvcid": "4420", 00:37:11.019 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:11.019 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:11.019 "hdgst": false, 00:37:11.019 "ddgst": false 00:37:11.019 }, 00:37:11.019 "method": "bdev_nvme_attach_controller" 00:37:11.019 },{ 00:37:11.019 "params": { 00:37:11.019 "name": "Nvme1", 00:37:11.019 "trtype": "tcp", 00:37:11.019 "traddr": "10.0.0.2", 00:37:11.019 "adrfam": "ipv4", 00:37:11.019 "trsvcid": "4420", 00:37:11.019 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:11.019 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:11.019 "hdgst": false, 00:37:11.019 "ddgst": false 00:37:11.019 }, 00:37:11.019 "method": "bdev_nvme_attach_controller" 00:37:11.019 }' 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:11.019 17:02:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:11.019 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:11.019 ... 00:37:11.019 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:11.019 ... 
00:37:11.019 fio-3.35 00:37:11.019 Starting 4 threads 00:37:16.304 00:37:16.304 filename0: (groupid=0, jobs=1): err= 0: pid=2977611: Tue Oct 1 17:02:07 2024 00:37:16.304 read: IOPS=2491, BW=19.5MiB/s (20.4MB/s)(97.4MiB/5003msec) 00:37:16.304 slat (nsec): min=2865, max=24139, avg=6043.85, stdev=1055.91 00:37:16.304 clat (usec): min=1109, max=5776, avg=3193.62, stdev=523.93 00:37:16.304 lat (usec): min=1116, max=5789, avg=3199.66, stdev=523.87 00:37:16.304 clat percentiles (usec): 00:37:16.304 | 1.00th=[ 2442], 5.00th=[ 2671], 10.00th=[ 2704], 20.00th=[ 2835], 00:37:16.304 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 3032], 60.00th=[ 3130], 00:37:16.304 | 70.00th=[ 3294], 80.00th=[ 3556], 90.00th=[ 3884], 95.00th=[ 4490], 00:37:16.304 | 99.00th=[ 4752], 99.50th=[ 4752], 99.90th=[ 4817], 99.95th=[ 5669], 00:37:16.304 | 99.99th=[ 5669] 00:37:16.304 bw ( KiB/s): min=18880, max=21792, per=27.65%, avg=19931.20, stdev=783.29, samples=10 00:37:16.304 iops : min= 2360, max= 2724, avg=2491.40, stdev=97.91, samples=10 00:37:16.304 lat (msec) : 2=0.43%, 4=90.10%, 10=9.47% 00:37:16.304 cpu : usr=97.26%, sys=2.36%, ctx=150, majf=0, minf=9 00:37:16.304 IO depths : 1=0.1%, 2=5.1%, 4=66.4%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.304 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.304 issued rwts: total=12464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.304 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:16.304 filename0: (groupid=0, jobs=1): err= 0: pid=2977612: Tue Oct 1 17:02:07 2024 00:37:16.304 read: IOPS=2188, BW=17.1MiB/s (17.9MB/s)(85.5MiB/5003msec) 00:37:16.304 slat (nsec): min=2886, max=25496, avg=6206.53, stdev=1654.06 00:37:16.304 clat (usec): min=1736, max=6051, avg=3638.45, stdev=305.25 00:37:16.304 lat (usec): min=1755, max=6062, avg=3644.66, stdev=305.26 00:37:16.304 clat percentiles (usec): 00:37:16.304 | 1.00th=[ 3064], 5.00th=[ 3294], 10.00th=[ 3326], 20.00th=[ 3490], 00:37:16.304 | 30.00th=[ 3523], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3589], 00:37:16.304 | 70.00th=[ 3720], 80.00th=[ 3851], 90.00th=[ 3916], 95.00th=[ 4228], 00:37:16.304 | 99.00th=[ 4621], 99.50th=[ 5211], 99.90th=[ 5735], 99.95th=[ 5800], 00:37:16.304 | 99.99th=[ 5997] 00:37:16.304 bw ( KiB/s): min=16832, max=17936, per=24.29%, avg=17505.90, stdev=292.15, samples=10 00:37:16.304 iops : min= 2104, max= 2242, avg=2188.20, stdev=36.50, samples=10 00:37:16.304 lat (msec) : 2=0.09%, 4=91.06%, 10=8.85% 00:37:16.304 cpu : usr=96.14%, sys=3.62%, ctx=6, majf=0, minf=9 00:37:16.304 IO depths : 1=0.1%, 2=0.1%, 4=72.3%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.304 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.304 issued rwts: total=10947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.304 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:16.304 filename1: (groupid=0, jobs=1): err= 0: pid=2977613: Tue Oct 1 17:02:07 2024 00:37:16.304 read: IOPS=2195, BW=17.2MiB/s (18.0MB/s)(85.8MiB/5002msec) 00:37:16.304 slat (nsec): min=7250, max=31340, avg=8038.37, stdev=1921.02 00:37:16.304 clat (usec): min=1340, max=9799, avg=3624.91, stdev=322.67 00:37:16.304 lat (usec): min=1348, max=9830, avg=3632.95, stdev=322.86 00:37:16.304 clat percentiles (usec): 00:37:16.304 | 1.00th=[ 3064], 5.00th=[ 3294], 10.00th=[ 3326], 20.00th=[ 3490], 00:37:16.304 | 30.00th=[ 3523], 
40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3589], 00:37:16.304 | 70.00th=[ 3720], 80.00th=[ 3851], 90.00th=[ 3916], 95.00th=[ 4178], 00:37:16.304 | 99.00th=[ 4293], 99.50th=[ 4424], 99.90th=[ 6063], 99.95th=[ 8094], 00:37:16.304 | 99.99th=[ 9765] 00:37:16.304 bw ( KiB/s): min=16944, max=17984, per=24.37%, avg=17562.67, stdev=288.33, samples=9 00:37:16.304 iops : min= 2118, max= 2248, avg=2195.33, stdev=36.04, samples=9 00:37:16.304 lat (msec) : 2=0.36%, 4=91.81%, 10=7.82% 00:37:16.304 cpu : usr=96.98%, sys=2.80%, ctx=6, majf=0, minf=9 00:37:16.304 IO depths : 1=0.1%, 2=0.1%, 4=66.8%, 8=33.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.304 complete : 0=0.0%, 4=96.8%, 8=3.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.304 issued rwts: total=10982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.304 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:16.304 filename1: (groupid=0, jobs=1): err= 0: pid=2977614: Tue Oct 1 17:02:07 2024 00:37:16.304 read: IOPS=2135, BW=16.7MiB/s (17.5MB/s)(83.4MiB/5001msec) 00:37:16.304 slat (nsec): min=7257, max=31454, avg=7879.93, stdev=1727.79 00:37:16.304 clat (usec): min=1451, max=7132, avg=3724.30, stdev=485.10 00:37:16.304 lat (usec): min=1458, max=7164, avg=3732.18, stdev=485.00 00:37:16.304 clat percentiles (usec): 00:37:16.304 | 1.00th=[ 3130], 5.00th=[ 3294], 10.00th=[ 3326], 20.00th=[ 3458], 00:37:16.304 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3687], 00:37:16.304 | 70.00th=[ 3818], 80.00th=[ 3884], 90.00th=[ 4228], 95.00th=[ 5014], 00:37:16.304 | 99.00th=[ 5604], 99.50th=[ 5800], 99.90th=[ 5866], 99.95th=[ 5866], 00:37:16.304 | 99.99th=[ 6128] 00:37:16.304 bw ( KiB/s): min=16448, max=17282, per=23.67%, avg=17059.78, stdev=247.29, samples=9 00:37:16.304 iops : min= 2056, max= 2160, avg=2132.44, stdev=30.88, samples=9 00:37:16.304 lat (msec) : 2=0.07%, 4=87.66%, 10=12.28% 00:37:16.304 cpu : usr=97.04%, sys=2.70%, ctx=8, majf=0, minf=9 00:37:16.304 IO depths : 1=0.1%, 2=0.1%, 4=74.1%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.304 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.304 issued rwts: total=10678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.304 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:16.304 00:37:16.304 Run status group 0 (all jobs): 00:37:16.304 READ: bw=70.4MiB/s (73.8MB/s), 16.7MiB/s-19.5MiB/s (17.5MB/s-20.4MB/s), io=352MiB (369MB), run=5001-5003msec 00:37:16.304 17:02:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:16.304 17:02:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:16.304 17:02:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:16.304 17:02:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:16.304 17:02:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:16.304 17:02:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:16.304 17:02:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.304 17:02:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.304 17:02:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.305 17:02:07 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:16.305 17:02:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.305 17:02:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.305 17:02:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.305 17:02:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:16.305 17:02:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:16.305 17:02:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:16.305 17:02:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:16.305 17:02:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.305 17:02:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.305 17:02:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.305 17:02:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:16.305 17:02:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.305 17:02:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.305 17:02:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.305 00:37:16.305 real 0m24.556s 00:37:16.305 user 5m1.632s 00:37:16.305 sys 0m3.969s 00:37:16.305 17:02:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:16.305 17:02:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.305 ************************************ 00:37:16.305 END TEST fio_dif_rand_params 00:37:16.305 ************************************ 00:37:16.305 17:02:07 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:16.305 17:02:07 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:16.305 17:02:07 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:16.305 17:02:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:16.305 ************************************ 00:37:16.305 START TEST fio_dif_digest 00:37:16.305 ************************************ 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 
00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:16.305 bdev_null0 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:16.305 [2024-10-01 17:02:07.955074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:16.305 { 00:37:16.305 "params": { 00:37:16.305 "name": "Nvme$subsystem", 00:37:16.305 "trtype": "$TEST_TRANSPORT", 00:37:16.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:16.305 "adrfam": "ipv4", 00:37:16.305 "trsvcid": "$NVMF_PORT", 00:37:16.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:16.305 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:37:16.305 "hdgst": ${hdgst:-false}, 00:37:16.305 "ddgst": ${ddgst:-false} 00:37:16.305 }, 00:37:16.305 "method": "bdev_nvme_attach_controller" 00:37:16.305 } 00:37:16.305 EOF 00:37:16.305 )") 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:37:16.305 17:02:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:16.305 "params": { 00:37:16.305 "name": "Nvme0", 00:37:16.305 "trtype": "tcp", 00:37:16.305 "traddr": "10.0.0.2", 00:37:16.305 "adrfam": "ipv4", 00:37:16.305 "trsvcid": "4420", 00:37:16.305 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:16.305 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:16.305 "hdgst": true, 00:37:16.305 "ddgst": true 00:37:16.305 }, 00:37:16.305 "method": "bdev_nvme_attach_controller" 00:37:16.305 }' 00:37:16.565 17:02:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:16.565 17:02:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:16.565 17:02:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:16.566 17:02:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:16.566 17:02:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:16.566 17:02:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:16.566 17:02:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:16.566 17:02:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:16.566 17:02:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:16.566 17:02:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:16.832 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:16.832 ... 
00:37:16.832 fio-3.35 00:37:16.832 Starting 3 threads 00:37:29.063 00:37:29.063 filename0: (groupid=0, jobs=1): err= 0: pid=2978824: Tue Oct 1 17:02:18 2024 00:37:29.063 read: IOPS=315, BW=39.4MiB/s (41.3MB/s)(396MiB/10046msec) 00:37:29.063 slat (nsec): min=7567, max=56369, avg=8391.31, stdev=1239.60 00:37:29.063 clat (usec): min=4923, max=48437, avg=9475.92, stdev=2287.10 00:37:29.063 lat (usec): min=4932, max=48446, avg=9484.31, stdev=2287.18 00:37:29.063 clat percentiles (usec): 00:37:29.063 | 1.00th=[ 5604], 5.00th=[ 6063], 10.00th=[ 6390], 20.00th=[ 7373], 00:37:29.063 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[10028], 00:37:29.063 | 70.00th=[10945], 80.00th=[11731], 90.00th=[12518], 95.00th=[12911], 00:37:29.063 | 99.00th=[13698], 99.50th=[14091], 99.90th=[14484], 99.95th=[15270], 00:37:29.063 | 99.99th=[48497] 00:37:29.063 bw ( KiB/s): min=35840, max=44288, per=46.97%, avg=40537.60, stdev=2393.26, samples=20 00:37:29.063 iops : min= 280, max= 346, avg=316.70, stdev=18.70, samples=20 00:37:29.063 lat (msec) : 10=59.82%, 20=40.15%, 50=0.03% 00:37:29.063 cpu : usr=94.71%, sys=5.02%, ctx=16, majf=0, minf=123 00:37:29.063 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:29.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.063 issued rwts: total=3168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:29.063 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:29.063 filename0: (groupid=0, jobs=1): err= 0: pid=2978825: Tue Oct 1 17:02:18 2024 00:37:29.063 read: IOPS=180, BW=22.5MiB/s (23.6MB/s)(226MiB/10039msec) 00:37:29.063 slat (usec): min=7, max=130, avg= 8.56, stdev= 3.01 00:37:29.063 clat (usec): min=6634, max=94475, avg=16636.45, stdev=14621.62 00:37:29.063 lat (usec): min=6642, max=94483, avg=16645.02, stdev=14621.85 00:37:29.063 clat percentiles (usec): 00:37:29.063 | 1.00th=[ 7439], 5.00th=[ 8094], 10.00th=[ 8586], 20.00th=[ 9241], 00:37:29.063 | 30.00th=[10290], 40.00th=[11338], 50.00th=[11994], 60.00th=[12387], 00:37:29.063 | 70.00th=[12911], 80.00th=[13698], 90.00th=[50594], 95.00th=[52691], 00:37:29.063 | 99.00th=[54789], 99.50th=[55837], 99.90th=[94897], 99.95th=[94897], 00:37:29.063 | 99.99th=[94897] 00:37:29.064 bw ( KiB/s): min=13568, max=31744, per=26.79%, avg=23116.80, stdev=4240.08, samples=20 00:37:29.064 iops : min= 106, max= 248, avg=180.60, stdev=33.13, samples=20 00:37:29.064 lat (msec) : 10=27.75%, 20=59.31%, 50=1.82%, 100=11.11% 00:37:29.064 cpu : usr=95.01%, sys=4.73%, ctx=23, majf=0, minf=205 00:37:29.064 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:29.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.064 issued rwts: total=1809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:29.064 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:29.064 filename0: (groupid=0, jobs=1): err= 0: pid=2978826: Tue Oct 1 17:02:18 2024 00:37:29.064 read: IOPS=178, BW=22.4MiB/s (23.4MB/s)(225MiB/10047msec) 00:37:29.064 slat (nsec): min=7624, max=34566, avg=8432.68, stdev=955.23 00:37:29.064 clat (usec): min=6561, max=94290, avg=16738.89, stdev=14813.29 00:37:29.064 lat (usec): min=6569, max=94299, avg=16747.33, stdev=14813.32 00:37:29.064 clat percentiles (usec): 00:37:29.064 | 1.00th=[ 7635], 5.00th=[ 8356], 10.00th=[ 8717], 20.00th=[ 9372], 
00:37:29.064 | 30.00th=[10290], 40.00th=[11600], 50.00th=[12125], 60.00th=[12649], 00:37:29.064 | 70.00th=[13173], 80.00th=[13960], 90.00th=[50594], 95.00th=[53216], 00:37:29.064 | 99.00th=[54789], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:37:29.064 | 99.99th=[93848] 00:37:29.064 bw ( KiB/s): min=14592, max=33280, per=26.62%, avg=22976.00, stdev=4194.91, samples=20 00:37:29.064 iops : min= 114, max= 260, avg=179.50, stdev=32.77, samples=20 00:37:29.064 lat (msec) : 10=27.32%, 20=60.32%, 50=1.45%, 100=10.91% 00:37:29.064 cpu : usr=94.99%, sys=4.75%, ctx=22, majf=0, minf=139 00:37:29.064 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:29.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.064 issued rwts: total=1797,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:29.064 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:29.064 00:37:29.064 Run status group 0 (all jobs): 00:37:29.064 READ: bw=84.3MiB/s (88.4MB/s), 22.4MiB/s-39.4MiB/s (23.4MB/s-41.3MB/s), io=847MiB (888MB), run=10039-10047msec 00:37:29.064 17:02:19 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:29.064 17:02:19 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:29.064 17:02:19 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:29.064 17:02:19 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:29.064 17:02:19 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:29.064 17:02:19 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:29.064 17:02:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.064 17:02:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:29.064 17:02:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.064 17:02:19 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:29.064 17:02:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.064 17:02:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:29.064 17:02:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.064 00:37:29.064 real 0m11.217s 00:37:29.064 user 0m38.176s 00:37:29.064 sys 0m1.741s 00:37:29.064 17:02:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:29.064 17:02:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:29.064 ************************************ 00:37:29.064 END TEST fio_dif_digest 00:37:29.064 ************************************ 00:37:29.064 17:02:19 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:29.064 17:02:19 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:29.064 17:02:19 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:29.064 17:02:19 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:29.064 17:02:19 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:29.064 17:02:19 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:29.064 17:02:19 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:29.064 17:02:19 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:29.064 rmmod nvme_tcp 00:37:29.064 rmmod nvme_fabrics 00:37:29.064 rmmod nvme_keyring 00:37:29.064 17:02:19 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:29.064 17:02:19 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:29.064 17:02:19 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:29.064 17:02:19 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 2969799 ']' 00:37:29.064 17:02:19 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 2969799 00:37:29.064 17:02:19 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 2969799 ']' 00:37:29.064 17:02:19 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 2969799 00:37:29.064 17:02:19 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:37:29.064 17:02:19 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:29.064 17:02:19 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2969799 00:37:29.064 17:02:19 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:29.064 17:02:19 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:29.064 17:02:19 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2969799' 00:37:29.064 killing process with pid 2969799 00:37:29.064 17:02:19 nvmf_dif -- common/autotest_common.sh@969 -- # kill 2969799 00:37:29.064 17:02:19 nvmf_dif -- common/autotest_common.sh@974 -- # wait 2969799 00:37:29.064 17:02:19 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:37:29.064 17:02:19 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:31.610 Waiting for block devices as requested 00:37:31.610 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:31.610 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:31.610 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:31.610 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:31.610 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:31.610 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:31.870 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:31.870 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:31.870 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:37:32.131 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:32.131 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:32.392 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:32.392 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:32.392 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:32.392 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:32.651 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:32.651 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:32.911 17:02:24 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:32.911 17:02:24 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:32.911 17:02:24 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:32.911 17:02:24 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:37:32.911 17:02:24 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:37:32.911 17:02:24 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:32.911 17:02:24 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:32.911 17:02:24 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:32.911 17:02:24 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:32.911 17:02:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:32.911 17:02:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:35.456 17:02:26 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:35.456 
00:37:35.456 real 1m17.602s 00:37:35.456 user 7m28.480s 00:37:35.456 sys 0m20.781s 00:37:35.456 17:02:26 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:35.456 17:02:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:35.456 ************************************ 00:37:35.456 END TEST nvmf_dif 00:37:35.456 ************************************ 00:37:35.456 17:02:26 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:35.456 17:02:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:35.456 17:02:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:35.456 17:02:26 -- common/autotest_common.sh@10 -- # set +x 00:37:35.456 ************************************ 00:37:35.456 START TEST nvmf_abort_qd_sizes 00:37:35.456 ************************************ 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:35.456 * Looking for test storage... 00:37:35.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:35.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.456 --rc genhtml_branch_coverage=1 00:37:35.456 --rc genhtml_function_coverage=1 00:37:35.456 --rc genhtml_legend=1 00:37:35.456 --rc geninfo_all_blocks=1 00:37:35.456 --rc geninfo_unexecuted_blocks=1 00:37:35.456 00:37:35.456 ' 00:37:35.456 17:02:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:35.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.456 --rc genhtml_branch_coverage=1 00:37:35.456 --rc genhtml_function_coverage=1 00:37:35.456 --rc genhtml_legend=1 00:37:35.457 --rc geninfo_all_blocks=1 00:37:35.457 --rc geninfo_unexecuted_blocks=1 00:37:35.457 00:37:35.457 ' 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:35.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.457 --rc genhtml_branch_coverage=1 00:37:35.457 --rc genhtml_function_coverage=1 00:37:35.457 --rc genhtml_legend=1 00:37:35.457 --rc geninfo_all_blocks=1 00:37:35.457 --rc geninfo_unexecuted_blocks=1 00:37:35.457 00:37:35.457 ' 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:35.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.457 --rc genhtml_branch_coverage=1 00:37:35.457 --rc genhtml_function_coverage=1 00:37:35.457 --rc genhtml_legend=1 00:37:35.457 --rc geninfo_all_blocks=1 00:37:35.457 --rc geninfo_unexecuted_blocks=1 00:37:35.457 00:37:35.457 ' 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:35.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:37:35.457 17:02:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:43.593 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:43.593 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:43.593 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.593 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:43.593 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:43.594 17:02:33 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:43.594 17:02:33 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:43.594 17:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:43.594 17:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:43.594 17:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:43.594 17:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:43.594 17:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:43.594 17:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:43.594 17:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:43.594 17:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:43.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:43.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:37:43.594 00:37:43.594 --- 10.0.0.2 ping statistics --- 00:37:43.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:43.594 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:37:43.594 17:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:43.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:43.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:37:43.594 00:37:43.594 --- 10.0.0.1 ping statistics --- 00:37:43.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:43.594 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:37:43.594 17:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:43.594 17:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:37:43.594 17:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:37:43.594 17:02:34 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:46.135 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:46.135 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:46.135 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:46.135 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:46.135 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:46.135 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:46.135 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:46.135 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:46.135 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:46.395 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:46.395 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:46.395 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:46.395 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:46.395 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:46.395 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:46.395 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:48.305 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=2987828 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 2987828 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 2987828 ']' 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
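For readers following the nvmf_tcp_init steps traced above: with NET_TYPE=phy the harness takes two back-to-back E810 ports, moves the target-side port into its own network namespace, addresses both ends on 10.0.0.0/24, and opens the NVMe/TCP port with a comment-tagged iptables rule. A minimal standalone sketch of that topology, with the interface names, addresses, and port taken verbatim from the trace above:

# Back-to-back namespace topology from nvmf_tcp_init (names per this log;
# adjust cvl_0_0/cvl_0_1 for other NICs).
TARGET_NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"                 # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
# Let NVMe/TCP (4420) in; the SPDK_NVMF comment lets teardown strip the rule wholesale.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1          # target -> initiator

The nvmf_tgt started next is launched under ip netns exec cvl_0_0_ns_spdk, which is why its listener on 10.0.0.2 is reachable from the default namespace; teardown near the end of this log undoes the firewall change with iptables-save | grep -v SPDK_NVMF | iptables-restore.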
00:37:48.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:48.567 17:02:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:48.567 [2024-10-01 17:02:40.123925] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:37:48.567 [2024-10-01 17:02:40.124008] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:48.567 [2024-10-01 17:02:40.213054] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:48.826 [2024-10-01 17:02:40.306340] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:48.826 [2024-10-01 17:02:40.306402] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:48.826 [2024-10-01 17:02:40.306410] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:48.826 [2024-10-01 17:02:40.306417] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:48.826 [2024-10-01 17:02:40.306423] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:48.826 [2024-10-01 17:02:40.306555] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:48.826 [2024-10-01 17:02:40.306687] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:48.826 [2024-10-01 17:02:40.306812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:37:48.826 [2024-10-01 17:02:40.306814] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:49.394 17:02:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:49.394 17:02:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:37:49.394 17:02:41 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:49.394 17:02:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:49.394 17:02:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:49.394 17:02:41 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:49.394 17:02:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:49.395 17:02:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:49.395 17:02:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:49.395 17:02:41 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:49.395 17:02:41 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:49.395 17:02:41 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:49.395 17:02:41 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:49.395 17:02:41 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:49.395 17:02:41 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:37:49.395 17:02:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:49.395 
17:02:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:49.395 17:02:41 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:49.395 17:02:41 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:49.395 17:02:41 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:49.395 17:02:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:49.395 17:02:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:49.395 17:02:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:49.395 17:02:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:49.395 17:02:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:49.395 17:02:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:49.654 ************************************ 00:37:49.654 START TEST spdk_target_abort 00:37:49.654 ************************************ 00:37:49.654 17:02:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:37:49.654 17:02:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:49.654 17:02:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:49.654 17:02:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.655 17:02:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:52.946 spdk_targetn1 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:52.946 [2024-10-01 17:02:43.929963] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:52.946 [2024-10-01 17:02:43.955193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:52.946 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:52.947 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:52.947 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:52.947 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:52.947 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:52.947 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:52.947 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:52.947 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:52.947 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:52.947 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:52.947 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:52.947 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:52.947 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:52.947 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:52.947 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:52.947 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:52.947 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:52.947 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:52.947 17:02:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:56.244 Initializing NVMe Controllers 00:37:56.244 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:56.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:56.244 Initialization complete. Launching workers. 00:37:56.244 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11814, failed: 0 00:37:56.244 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2433, failed to submit 9381 00:37:56.244 success 694, unsuccessful 1739, failed 0 00:37:56.244 17:02:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:56.245 17:02:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:59.546 Initializing NVMe Controllers 00:37:59.546 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:59.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:59.546 Initialization complete. Launching workers. 00:37:59.546 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8742, failed: 0 00:37:59.546 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1215, failed to submit 7527 00:37:59.546 success 341, unsuccessful 874, failed 0 00:37:59.546 17:02:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:59.546 17:02:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:02.091 Initializing NVMe Controllers 00:38:02.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:02.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:02.091 Initialization complete. Launching workers. 
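The spdk_target_abort runs above provision the target entirely over RPC and then drive it with the abort example at rising queue depths. A condensed sketch of the same sequence; scripts/rpc.py stands in here for the harness's rpc_cmd wrapper, and every argument below is copied from the trace:

# Provision an NVMe/TCP subsystem over RPC, then fire the abort example
# at each queue depth, exactly as abort_qd_sizes.sh does above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:testnqn

"$RPC" bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target   # creates spdk_targetn1
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
"$RPC" nvmf_subsystem_add_ns "$NQN" spdk_targetn1
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

for qd in 4 24 64; do
    # -q: queue depth, -w rw -M 50: 50/50 read/write mix, -o 4096: 4 KiB I/Os
    "$SPDK/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"
done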
00:38:02.091 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41535, failed: 0 00:38:02.091 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2556, failed to submit 38979 00:38:02.091 success 577, unsuccessful 1979, failed 0 00:38:02.091 17:02:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:02.091 17:02:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.091 17:02:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:02.091 17:02:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.091 17:02:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:02.091 17:02:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.091 17:02:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:04.634 17:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:04.634 17:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2987828 00:38:04.634 17:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 2987828 ']' 00:38:04.634 17:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 2987828 00:38:04.634 17:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:38:04.634 17:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:04.634 17:02:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2987828 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2987828' 00:38:04.634 killing process with pid 2987828 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 2987828 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 2987828 00:38:04.634 00:38:04.634 real 0m15.049s 00:38:04.634 user 1m0.623s 00:38:04.634 sys 0m2.043s 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:04.634 ************************************ 00:38:04.634 END TEST spdk_target_abort 00:38:04.634 ************************************ 00:38:04.634 17:02:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:04.634 17:02:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:04.634 17:02:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:04.634 17:02:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:04.634 ************************************ 00:38:04.634 START TEST kernel_target_abort 00:38:04.634 
************************************ 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:04.634 17:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:07.937 Waiting for block devices as requested 00:38:07.937 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:07.937 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:07.937 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:07.937 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:07.937 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:08.199 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:08.199 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:08.199 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:08.460 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:38:08.460 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:08.721 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:08.721 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:08.721 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:08.982 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:08.982 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:08.982 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:09.242 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:09.504 No valid GPT data, bailing 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:09.504 17:03:01 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:09.504 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.1 -t tcp -s 4420 00:38:09.766 00:38:09.766 Discovery Log Number of Records 2, Generation counter 2 00:38:09.766 =====Discovery Log Entry 0====== 00:38:09.766 trtype: tcp 00:38:09.766 adrfam: ipv4 00:38:09.766 subtype: current discovery subsystem 00:38:09.766 treq: not specified, sq flow control disable supported 00:38:09.766 portid: 1 00:38:09.766 trsvcid: 4420 00:38:09.766 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:09.766 traddr: 10.0.0.1 00:38:09.766 eflags: none 00:38:09.766 sectype: none 00:38:09.766 =====Discovery Log Entry 1====== 00:38:09.766 trtype: tcp 00:38:09.766 adrfam: ipv4 00:38:09.766 subtype: nvme subsystem 00:38:09.766 treq: not specified, sq flow control disable supported 00:38:09.766 portid: 1 00:38:09.766 trsvcid: 4420 00:38:09.766 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:09.766 traddr: 10.0.0.1 00:38:09.766 eflags: none 00:38:09.766 sectype: none 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:09.766 17:03:01 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:09.766 17:03:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:13.071 Initializing NVMe Controllers 00:38:13.071 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:13.071 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:13.071 Initialization complete. Launching workers. 00:38:13.071 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65316, failed: 0 00:38:13.071 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 65316, failed to submit 0 00:38:13.071 success 0, unsuccessful 65316, failed 0 00:38:13.071 17:03:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:13.071 17:03:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:16.369 Initializing NVMe Controllers 00:38:16.369 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:16.369 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:16.369 Initialization complete. Launching workers. 
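Looking back at the configure_kernel_target trace a few lines up: the kernel-mode counterpart needs no SPDK RPCs at all and is driven entirely through nvmet's configfs tree. A sketch of those steps with the values shown in the trace; note that xtrace hides echo redirection targets, so the attribute paths below are the standard nvmet ones and should be read as assumptions:

# Kernel NVMe/TCP target via configfs, per configure_kernel_target above.
modprobe nvmet                       # nvmet-tcp is pulled in when the tcp port is enabled
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # attribute name assumed
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # block device picked above
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"   # publish the subsystem on the port

The nvme discover output above confirms the result: a discovery subsystem plus nqn.2016-06.io.spdk:testnqn, both listening on 10.0.0.1:4420.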
00:38:16.369 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 114012, failed: 0 00:38:16.369 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23742, failed to submit 90270 00:38:16.369 success 0, unsuccessful 23742, failed 0 00:38:16.369 17:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:16.369 17:03:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:18.914 Initializing NVMe Controllers 00:38:18.914 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:18.914 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:18.914 Initialization complete. Launching workers. 00:38:18.914 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 104834, failed: 0 00:38:18.914 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26222, failed to submit 78612 00:38:18.914 success 0, unsuccessful 26222, failed 0 00:38:18.914 17:03:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:18.914 17:03:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:18.914 17:03:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:38:18.914 17:03:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:18.914 17:03:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:18.914 17:03:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:18.914 17:03:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:18.914 17:03:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:38:18.914 17:03:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:38:19.175 17:03:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:22.481 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:22.481 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:22.481 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:22.481 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:22.481 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:22.481 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:22.481 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:22.481 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:22.481 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:22.742 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:22.742 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:22.742 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:22.742 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:22.742 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:22.742 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:38:22.742 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:24.655 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:38:24.916 00:38:24.916 real 0m20.267s 00:38:24.916 user 0m9.355s 00:38:24.916 sys 0m6.279s 00:38:24.916 17:03:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:24.916 17:03:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:24.916 ************************************ 00:38:24.916 END TEST kernel_target_abort 00:38:24.916 ************************************ 00:38:24.916 17:03:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:24.916 17:03:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:24.916 17:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:24.916 17:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:24.916 17:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:24.916 17:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:24.916 17:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:24.916 17:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:24.916 rmmod nvme_tcp 00:38:24.916 rmmod nvme_fabrics 00:38:24.916 rmmod nvme_keyring 00:38:25.177 17:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:25.178 17:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:25.178 17:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:25.178 17:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 2987828 ']' 00:38:25.178 17:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 2987828 00:38:25.178 17:03:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 2987828 ']' 00:38:25.178 17:03:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 2987828 00:38:25.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2987828) - No such process 00:38:25.178 17:03:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 2987828 is not found' 00:38:25.178 Process with pid 2987828 is not found 00:38:25.178 17:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:38:25.178 17:03:16 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:28.495 Waiting for block devices as requested 00:38:28.495 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:28.757 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:28.757 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:28.757 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:29.020 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:29.020 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:29.020 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:29.280 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:29.280 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:38:29.540 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:29.540 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:29.540 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:29.800 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:29.800 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:29.800 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:29.800 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:30.059 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:30.319 17:03:21 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:30.319 17:03:21 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:30.319 17:03:21 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:30.319 17:03:21 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:38:30.319 17:03:21 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:30.319 17:03:21 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:38:30.319 17:03:21 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:30.319 17:03:21 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:30.319 17:03:21 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:30.319 17:03:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:30.319 17:03:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:32.866 17:03:23 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:32.866 00:38:32.866 real 0m57.243s 00:38:32.866 user 1m15.450s 00:38:32.866 sys 0m19.646s 00:38:32.866 17:03:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:32.866 17:03:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:32.866 ************************************ 00:38:32.866 END TEST nvmf_abort_qd_sizes 00:38:32.866 ************************************ 00:38:32.866 17:03:23 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:32.866 17:03:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:32.866 17:03:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:32.866 17:03:24 -- common/autotest_common.sh@10 -- # set +x 00:38:32.866 ************************************ 00:38:32.866 START TEST keyring_file 00:38:32.866 ************************************ 00:38:32.866 17:03:24 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:32.866 * Looking for test storage... 
00:38:32.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:32.866 17:03:24 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:32.866 17:03:24 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:38:32.866 17:03:24 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:32.866 17:03:24 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:32.866 17:03:24 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:32.866 17:03:24 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:32.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.866 --rc genhtml_branch_coverage=1 00:38:32.866 --rc genhtml_function_coverage=1 00:38:32.866 --rc genhtml_legend=1 00:38:32.866 --rc geninfo_all_blocks=1 00:38:32.866 --rc geninfo_unexecuted_blocks=1 00:38:32.866 00:38:32.866 ' 00:38:32.866 17:03:24 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:32.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.866 --rc genhtml_branch_coverage=1 00:38:32.866 --rc genhtml_function_coverage=1 00:38:32.866 --rc genhtml_legend=1 00:38:32.866 --rc geninfo_all_blocks=1 
00:38:32.866 --rc geninfo_unexecuted_blocks=1 00:38:32.866 00:38:32.866 ' 00:38:32.866 17:03:24 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:32.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.866 --rc genhtml_branch_coverage=1 00:38:32.866 --rc genhtml_function_coverage=1 00:38:32.866 --rc genhtml_legend=1 00:38:32.866 --rc geninfo_all_blocks=1 00:38:32.866 --rc geninfo_unexecuted_blocks=1 00:38:32.866 00:38:32.866 ' 00:38:32.866 17:03:24 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:32.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.866 --rc genhtml_branch_coverage=1 00:38:32.866 --rc genhtml_function_coverage=1 00:38:32.866 --rc genhtml_legend=1 00:38:32.866 --rc geninfo_all_blocks=1 00:38:32.866 --rc geninfo_unexecuted_blocks=1 00:38:32.866 00:38:32.866 ' 00:38:32.866 17:03:24 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:32.866 17:03:24 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:32.866 17:03:24 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:32.866 17:03:24 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:32.866 17:03:24 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:32.866 17:03:24 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:32.866 17:03:24 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:32.866 17:03:24 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:32.866 17:03:24 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:32.866 17:03:24 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:32.866 17:03:24 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:32.866 17:03:24 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:32.866 17:03:24 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:32.866 17:03:24 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:38:32.866 17:03:24 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:38:32.866 17:03:24 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:32.866 17:03:24 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:32.866 17:03:24 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:32.866 17:03:24 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:32.866 17:03:24 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:32.866 17:03:24 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:32.866 17:03:24 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.866 17:03:24 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.867 17:03:24 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.867 17:03:24 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:32.867 17:03:24 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:32.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:32.867 17:03:24 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:32.867 17:03:24 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:32.867 17:03:24 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:32.867 17:03:24 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:32.867 17:03:24 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:32.867 17:03:24 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
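Context for the prep_key/format_interchange_psk calls running here: each hex key is written to a mktemp file in NVMe TLS PSK interchange format and locked down to 0600 before keyring_file_add_key loads it. A minimal standalone sketch of that derivation, assuming (the log only shows a python heredoc being invoked, not its body) that the interchange payload is base64 of the raw key bytes followed by a 4-byte CRC-32 trailer, and that the digest argument becomes the second colon-separated field:

    # Sketch only, not SPDK's verbatim helper: derive an interchange-format PSK
    # from a hex key. The CRC-32 trailer and field encoding are assumptions.
    key=00112233445566778899aabbccddeeff digest=0
    python3 - "$key" "$digest" <<'PYEOF'
    import base64, sys, zlib
    psk = bytes.fromhex(sys.argv[1])
    crc = zlib.crc32(psk).to_bytes(4, "little")  # trailer endianness assumed
    print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{base64.b64encode(psk + crc).decode()}:")
    PYEOF

A string of this shape is what ends up in /tmp/tmp.1Gi49Ao8A1 and /tmp/tmp.GiSmDL5Z0W below.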
00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1Gi49Ao8A1 00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@731 -- # python - 00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1Gi49Ao8A1 00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1Gi49Ao8A1 00:38:32.867 17:03:24 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.1Gi49Ao8A1 00:38:32.867 17:03:24 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.GiSmDL5Z0W 00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:38:32.867 17:03:24 keyring_file -- nvmf/common.sh@731 -- # python - 00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.GiSmDL5Z0W 00:38:32.867 17:03:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.GiSmDL5Z0W 00:38:32.867 17:03:24 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.GiSmDL5Z0W 00:38:32.867 17:03:24 keyring_file -- keyring/file.sh@30 -- # tgtpid=2998337 00:38:32.867 17:03:24 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2998337 00:38:32.867 17:03:24 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:32.867 17:03:24 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2998337 ']' 00:38:32.867 17:03:24 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:32.867 17:03:24 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:32.867 17:03:24 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:32.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:32.867 17:03:24 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:32.867 17:03:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:32.867 [2024-10-01 17:03:24.433212] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:38:32.867 [2024-10-01 17:03:24.433283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998337 ] 00:38:32.867 [2024-10-01 17:03:24.513778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:33.128 [2024-10-01 17:03:24.607131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:33.700 17:03:25 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:33.700 17:03:25 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:38:33.700 17:03:25 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:33.700 17:03:25 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.700 17:03:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:33.700 [2024-10-01 17:03:25.331395] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:33.700 null0 00:38:33.700 [2024-10-01 17:03:25.363444] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:33.700 [2024-10-01 17:03:25.363892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.961 17:03:25 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:33.961 [2024-10-01 17:03:25.395508] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:33.961 request: 00:38:33.961 { 00:38:33.961 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:33.961 "secure_channel": false, 00:38:33.961 "listen_address": { 00:38:33.961 "trtype": "tcp", 00:38:33.961 "traddr": "127.0.0.1", 00:38:33.961 "trsvcid": "4420" 00:38:33.961 }, 00:38:33.961 "method": "nvmf_subsystem_add_listener", 00:38:33.961 "req_id": 1 00:38:33.961 } 00:38:33.961 Got JSON-RPC error response 00:38:33.961 response: 00:38:33.961 { 00:38:33.961 
"code": -32602, 00:38:33.961 "message": "Invalid parameters" 00:38:33.961 } 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:33.961 17:03:25 keyring_file -- keyring/file.sh@47 -- # bperfpid=2998522 00:38:33.961 17:03:25 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2998522 /var/tmp/bperf.sock 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2998522 ']' 00:38:33.961 17:03:25 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:33.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:33.961 17:03:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:33.961 [2024-10-01 17:03:25.458683] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:38:33.961 [2024-10-01 17:03:25.458744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998522 ] 00:38:33.961 [2024-10-01 17:03:25.513811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:33.961 [2024-10-01 17:03:25.579849] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:34.222 17:03:25 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:34.222 17:03:25 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:38:34.222 17:03:25 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1Gi49Ao8A1 00:38:34.222 17:03:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1Gi49Ao8A1 00:38:34.222 17:03:25 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.GiSmDL5Z0W 00:38:34.222 17:03:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.GiSmDL5Z0W 00:38:34.483 17:03:26 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:34.483 17:03:26 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:34.483 17:03:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:34.483 17:03:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:34.483 17:03:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:38:34.744 17:03:26 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.1Gi49Ao8A1 == \/\t\m\p\/\t\m\p\.\1\G\i\4\9\A\o\8\A\1 ]] 00:38:34.744 17:03:26 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:34.744 17:03:26 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:34.744 17:03:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:34.744 17:03:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:34.744 17:03:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:35.005 17:03:26 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.GiSmDL5Z0W == \/\t\m\p\/\t\m\p\.\G\i\S\m\D\L\5\Z\0\W ]] 00:38:35.005 17:03:26 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:35.006 17:03:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:35.006 17:03:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:35.006 17:03:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:35.006 17:03:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:35.006 17:03:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:35.266 17:03:26 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:35.267 17:03:26 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:35.267 17:03:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:35.267 17:03:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:35.267 17:03:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:35.267 17:03:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:35.267 17:03:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:35.267 17:03:26 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:35.267 17:03:26 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:35.267 17:03:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:35.527 [2024-10-01 17:03:27.122638] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:35.527 nvme0n1 00:38:35.787 17:03:27 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:35.787 17:03:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:35.787 17:03:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:35.787 17:03:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:35.787 17:03:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:35.787 17:03:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:35.787 17:03:27 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:35.787 17:03:27 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:35.787 17:03:27 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:38:35.787 17:03:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:35.787 17:03:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:35.787 17:03:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:35.787 17:03:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:36.047 17:03:27 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:36.047 17:03:27 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:36.308 Running I/O for 1 seconds... 00:38:37.250 14475.00 IOPS, 56.54 MiB/s 00:38:37.250 Latency(us) 00:38:37.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:37.250 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:37.250 nvme0n1 : 1.01 14529.55 56.76 0.00 0.00 8787.78 3554.07 216167.98 00:38:37.250 =================================================================================================================== 00:38:37.250 Total : 14529.55 56.76 0.00 0.00 8787.78 3554.07 216167.98 00:38:37.250 { 00:38:37.250 "results": [ 00:38:37.250 { 00:38:37.250 "job": "nvme0n1", 00:38:37.250 "core_mask": "0x2", 00:38:37.250 "workload": "randrw", 00:38:37.250 "percentage": 50, 00:38:37.250 "status": "finished", 00:38:37.250 "queue_depth": 128, 00:38:37.250 "io_size": 4096, 00:38:37.250 "runtime": 1.005124, 00:38:37.250 "iops": 14529.550582813663, 00:38:37.250 "mibps": 56.75605696411587, 00:38:37.250 "io_failed": 0, 00:38:37.250 "io_timeout": 0, 00:38:37.250 "avg_latency_us": 8787.777581695214, 00:38:37.250 "min_latency_us": 3554.067692307692, 00:38:37.250 "max_latency_us": 216167.9753846154 00:38:37.250 } 00:38:37.250 ], 00:38:37.250 "core_count": 1 00:38:37.250 } 00:38:37.250 17:03:28 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:37.250 17:03:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:37.511 17:03:28 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:37.511 17:03:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:37.511 17:03:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:37.511 17:03:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:37.511 17:03:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:37.511 17:03:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:37.773 17:03:29 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:37.773 17:03:29 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:37.773 17:03:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:37.773 17:03:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:37.773 17:03:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:37.773 17:03:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:37.773 17:03:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:37.773 
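The bdevperf result block above is printed both as a table and as JSON, so the headline numbers are scriptable. Assuming the JSON portion has been captured to a file (bperf_result.json is a hypothetical name, not produced by this run), one way to pull them out, using the field names visible in the output above:

    # Extract per-job IOPS and average latency from the bdevperf result JSON.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' bperf_result.json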
17:03:29 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:37.773 17:03:29 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:37.773 17:03:29 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:37.773 17:03:29 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:37.773 17:03:29 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:37.773 17:03:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:37.773 17:03:29 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:37.773 17:03:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:37.773 17:03:29 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:37.773 17:03:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:38.034 [2024-10-01 17:03:29.645524] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:38.034 [2024-10-01 17:03:29.646287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ddaa0 (107): Transport endpoint is not connected 00:38:38.034 [2024-10-01 17:03:29.647283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ddaa0 (9): Bad file descriptor 00:38:38.034 [2024-10-01 17:03:29.648285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:38.034 [2024-10-01 17:03:29.648292] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:38.034 [2024-10-01 17:03:29.648298] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:38.034 [2024-10-01 17:03:29.648305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
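The NOT wrapper used at keyring/file.sh@70 above is the suite's negative-test primitive: it succeeds only when the wrapped command fails, which is exactly what attaching with the wrong PSK (key1) should do. A simplified sketch of the pattern; the real helper in autotest_common.sh does additional argument validation and exit-status bookkeeping:

    # Minimal sketch of the NOT pattern: invert the wrapped command's status.
    NOT() {
        if "$@"; then
            return 1  # command unexpectedly succeeded
        fi
        return 0      # expected failure
    }
    NOT false && echo 'negative test passed'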
00:38:38.034 request: 00:38:38.034 { 00:38:38.034 "name": "nvme0", 00:38:38.034 "trtype": "tcp", 00:38:38.034 "traddr": "127.0.0.1", 00:38:38.034 "adrfam": "ipv4", 00:38:38.034 "trsvcid": "4420", 00:38:38.034 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:38.034 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:38.034 "prchk_reftag": false, 00:38:38.034 "prchk_guard": false, 00:38:38.034 "hdgst": false, 00:38:38.034 "ddgst": false, 00:38:38.034 "psk": "key1", 00:38:38.034 "allow_unrecognized_csi": false, 00:38:38.034 "method": "bdev_nvme_attach_controller", 00:38:38.034 "req_id": 1 00:38:38.034 } 00:38:38.034 Got JSON-RPC error response 00:38:38.034 response: 00:38:38.034 { 00:38:38.034 "code": -5, 00:38:38.034 "message": "Input/output error" 00:38:38.034 } 00:38:38.034 17:03:29 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:38.034 17:03:29 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:38.034 17:03:29 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:38.034 17:03:29 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:38.034 17:03:29 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:38.034 17:03:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:38.034 17:03:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:38.034 17:03:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:38.034 17:03:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:38.034 17:03:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.295 17:03:29 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:38.295 17:03:29 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:38.295 17:03:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:38.295 17:03:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:38.295 17:03:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:38.295 17:03:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.295 17:03:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:38.556 17:03:30 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:38.556 17:03:30 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:38.556 17:03:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:38.817 17:03:30 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:38.817 17:03:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:39.078 17:03:30 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:39.078 17:03:30 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:39.078 17:03:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:39.078 17:03:30 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:38:39.078 17:03:30 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.1Gi49Ao8A1 00:38:39.078 17:03:30 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.1Gi49Ao8A1 00:38:39.078 17:03:30 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:39.078 17:03:30 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.1Gi49Ao8A1 00:38:39.078 17:03:30 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:39.078 17:03:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:39.078 17:03:30 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:39.078 17:03:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:39.078 17:03:30 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1Gi49Ao8A1 00:38:39.078 17:03:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1Gi49Ao8A1 00:38:39.338 [2024-10-01 17:03:30.955725] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.1Gi49Ao8A1': 0100660 00:38:39.338 [2024-10-01 17:03:30.955749] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:39.338 request: 00:38:39.338 { 00:38:39.338 "name": "key0", 00:38:39.338 "path": "/tmp/tmp.1Gi49Ao8A1", 00:38:39.338 "method": "keyring_file_add_key", 00:38:39.338 "req_id": 1 00:38:39.338 } 00:38:39.338 Got JSON-RPC error response 00:38:39.338 response: 00:38:39.338 { 00:38:39.338 "code": -1, 00:38:39.338 "message": "Operation not permitted" 00:38:39.338 } 00:38:39.338 17:03:30 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:39.338 17:03:30 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:39.338 17:03:30 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:39.338 17:03:30 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:39.338 17:03:30 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.1Gi49Ao8A1 00:38:39.338 17:03:30 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1Gi49Ao8A1 00:38:39.338 17:03:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1Gi49Ao8A1 00:38:39.599 17:03:31 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.1Gi49Ao8A1 00:38:39.599 17:03:31 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:39.599 17:03:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:39.599 17:03:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:39.599 17:03:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:39.599 17:03:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:39.599 17:03:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:39.860 17:03:31 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:39.860 17:03:31 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:39.860 17:03:31 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:39.860 17:03:31 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:39.860 17:03:31 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:39.860 17:03:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:39.860 17:03:31 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:39.860 17:03:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:39.860 17:03:31 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:39.860 17:03:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:40.120 [2024-10-01 17:03:31.601363] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.1Gi49Ao8A1': No such file or directory 00:38:40.121 [2024-10-01 17:03:31.601378] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:40.121 [2024-10-01 17:03:31.601393] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:40.121 [2024-10-01 17:03:31.601399] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:40.121 [2024-10-01 17:03:31.601405] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:40.121 [2024-10-01 17:03:31.601410] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:40.121 request: 00:38:40.121 { 00:38:40.121 "name": "nvme0", 00:38:40.121 "trtype": "tcp", 00:38:40.121 "traddr": "127.0.0.1", 00:38:40.121 "adrfam": "ipv4", 00:38:40.121 "trsvcid": "4420", 00:38:40.121 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:40.121 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:40.121 "prchk_reftag": false, 00:38:40.121 "prchk_guard": false, 00:38:40.121 "hdgst": false, 00:38:40.121 "ddgst": false, 00:38:40.121 "psk": "key0", 00:38:40.121 "allow_unrecognized_csi": false, 00:38:40.121 "method": "bdev_nvme_attach_controller", 00:38:40.121 "req_id": 1 00:38:40.121 } 00:38:40.121 Got JSON-RPC error response 00:38:40.121 response: 00:38:40.121 { 00:38:40.121 "code": -19, 00:38:40.121 "message": "No such device" 00:38:40.121 } 00:38:40.121 17:03:31 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:40.121 17:03:31 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:40.121 17:03:31 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:40.121 17:03:31 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:40.121 17:03:31 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:40.121 17:03:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:40.381 17:03:31 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:40.381 17:03:31 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:38:40.381 17:03:31 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:40.381 17:03:31 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:40.381 17:03:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:40.381 17:03:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:40.381 17:03:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jFHDBBacCI 00:38:40.381 17:03:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:40.381 17:03:31 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:40.381 17:03:31 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:38:40.381 17:03:31 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:38:40.381 17:03:31 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:38:40.381 17:03:31 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:38:40.381 17:03:31 keyring_file -- nvmf/common.sh@731 -- # python - 00:38:40.381 17:03:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jFHDBBacCI 00:38:40.381 17:03:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jFHDBBacCI 00:38:40.381 17:03:31 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.jFHDBBacCI 00:38:40.381 17:03:31 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jFHDBBacCI 00:38:40.381 17:03:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jFHDBBacCI 00:38:40.641 17:03:32 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:40.641 17:03:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:40.901 nvme0n1 00:38:40.901 17:03:32 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:40.901 17:03:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:40.901 17:03:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:40.901 17:03:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:40.901 17:03:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:40.901 17:03:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:41.160 17:03:32 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:41.160 17:03:32 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:41.160 17:03:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:41.160 17:03:32 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:41.160 17:03:32 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:41.160 17:03:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:41.160 17:03:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:41.160 17:03:32 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:41.421 17:03:33 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:41.421 17:03:33 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:41.421 17:03:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:41.421 17:03:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:41.421 17:03:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:41.421 17:03:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:41.421 17:03:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:41.716 17:03:33 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:41.716 17:03:33 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:41.716 17:03:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:41.977 17:03:33 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:41.977 17:03:33 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:41.977 17:03:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:41.977 17:03:33 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:41.977 17:03:33 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jFHDBBacCI 00:38:41.977 17:03:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jFHDBBacCI 00:38:42.237 17:03:33 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.GiSmDL5Z0W 00:38:42.237 17:03:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.GiSmDL5Z0W 00:38:42.497 17:03:34 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:42.497 17:03:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:42.758 nvme0n1 00:38:42.758 17:03:34 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:42.758 17:03:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:43.017 17:03:34 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:43.017 "subsystems": [ 00:38:43.017 { 00:38:43.017 "subsystem": "keyring", 00:38:43.017 "config": [ 00:38:43.017 { 00:38:43.017 "method": "keyring_file_add_key", 00:38:43.017 "params": { 00:38:43.017 "name": "key0", 00:38:43.017 "path": "/tmp/tmp.jFHDBBacCI" 00:38:43.017 } 00:38:43.017 }, 00:38:43.017 { 00:38:43.017 "method": "keyring_file_add_key", 00:38:43.017 "params": { 00:38:43.017 "name": "key1", 00:38:43.017 "path": "/tmp/tmp.GiSmDL5Z0W" 00:38:43.017 } 00:38:43.017 } 00:38:43.017 ] 00:38:43.017 
}, 00:38:43.018 { 00:38:43.018 "subsystem": "iobuf", 00:38:43.018 "config": [ 00:38:43.018 { 00:38:43.018 "method": "iobuf_set_options", 00:38:43.018 "params": { 00:38:43.018 "small_pool_count": 8192, 00:38:43.018 "large_pool_count": 1024, 00:38:43.018 "small_bufsize": 8192, 00:38:43.018 "large_bufsize": 135168 00:38:43.018 } 00:38:43.018 } 00:38:43.018 ] 00:38:43.018 }, 00:38:43.018 { 00:38:43.018 "subsystem": "sock", 00:38:43.018 "config": [ 00:38:43.018 { 00:38:43.018 "method": "sock_set_default_impl", 00:38:43.018 "params": { 00:38:43.018 "impl_name": "posix" 00:38:43.018 } 00:38:43.018 }, 00:38:43.018 { 00:38:43.018 "method": "sock_impl_set_options", 00:38:43.018 "params": { 00:38:43.018 "impl_name": "ssl", 00:38:43.018 "recv_buf_size": 4096, 00:38:43.018 "send_buf_size": 4096, 00:38:43.018 "enable_recv_pipe": true, 00:38:43.018 "enable_quickack": false, 00:38:43.018 "enable_placement_id": 0, 00:38:43.018 "enable_zerocopy_send_server": true, 00:38:43.018 "enable_zerocopy_send_client": false, 00:38:43.018 "zerocopy_threshold": 0, 00:38:43.018 "tls_version": 0, 00:38:43.018 "enable_ktls": false 00:38:43.018 } 00:38:43.018 }, 00:38:43.018 { 00:38:43.018 "method": "sock_impl_set_options", 00:38:43.018 "params": { 00:38:43.018 "impl_name": "posix", 00:38:43.018 "recv_buf_size": 2097152, 00:38:43.018 "send_buf_size": 2097152, 00:38:43.018 "enable_recv_pipe": true, 00:38:43.018 "enable_quickack": false, 00:38:43.018 "enable_placement_id": 0, 00:38:43.018 "enable_zerocopy_send_server": true, 00:38:43.018 "enable_zerocopy_send_client": false, 00:38:43.018 "zerocopy_threshold": 0, 00:38:43.018 "tls_version": 0, 00:38:43.018 "enable_ktls": false 00:38:43.018 } 00:38:43.018 } 00:38:43.018 ] 00:38:43.018 }, 00:38:43.018 { 00:38:43.018 "subsystem": "vmd", 00:38:43.018 "config": [] 00:38:43.018 }, 00:38:43.018 { 00:38:43.018 "subsystem": "accel", 00:38:43.018 "config": [ 00:38:43.018 { 00:38:43.018 "method": "accel_set_options", 00:38:43.018 "params": { 00:38:43.018 "small_cache_size": 128, 00:38:43.018 "large_cache_size": 16, 00:38:43.018 "task_count": 2048, 00:38:43.018 "sequence_count": 2048, 00:38:43.018 "buf_count": 2048 00:38:43.018 } 00:38:43.018 } 00:38:43.018 ] 00:38:43.018 }, 00:38:43.018 { 00:38:43.018 "subsystem": "bdev", 00:38:43.018 "config": [ 00:38:43.018 { 00:38:43.018 "method": "bdev_set_options", 00:38:43.018 "params": { 00:38:43.018 "bdev_io_pool_size": 65535, 00:38:43.018 "bdev_io_cache_size": 256, 00:38:43.018 "bdev_auto_examine": true, 00:38:43.018 "iobuf_small_cache_size": 128, 00:38:43.018 "iobuf_large_cache_size": 16 00:38:43.018 } 00:38:43.018 }, 00:38:43.018 { 00:38:43.018 "method": "bdev_raid_set_options", 00:38:43.018 "params": { 00:38:43.018 "process_window_size_kb": 1024, 00:38:43.018 "process_max_bandwidth_mb_sec": 0 00:38:43.018 } 00:38:43.018 }, 00:38:43.018 { 00:38:43.018 "method": "bdev_iscsi_set_options", 00:38:43.018 "params": { 00:38:43.018 "timeout_sec": 30 00:38:43.018 } 00:38:43.018 }, 00:38:43.018 { 00:38:43.018 "method": "bdev_nvme_set_options", 00:38:43.018 "params": { 00:38:43.018 "action_on_timeout": "none", 00:38:43.018 "timeout_us": 0, 00:38:43.018 "timeout_admin_us": 0, 00:38:43.018 "keep_alive_timeout_ms": 10000, 00:38:43.018 "arbitration_burst": 0, 00:38:43.018 "low_priority_weight": 0, 00:38:43.018 "medium_priority_weight": 0, 00:38:43.018 "high_priority_weight": 0, 00:38:43.018 "nvme_adminq_poll_period_us": 10000, 00:38:43.018 "nvme_ioq_poll_period_us": 0, 00:38:43.018 "io_queue_requests": 512, 00:38:43.018 "delay_cmd_submit": true, 00:38:43.018 
"transport_retry_count": 4, 00:38:43.018 "bdev_retry_count": 3, 00:38:43.018 "transport_ack_timeout": 0, 00:38:43.018 "ctrlr_loss_timeout_sec": 0, 00:38:43.018 "reconnect_delay_sec": 0, 00:38:43.018 "fast_io_fail_timeout_sec": 0, 00:38:43.018 "disable_auto_failback": false, 00:38:43.018 "generate_uuids": false, 00:38:43.018 "transport_tos": 0, 00:38:43.018 "nvme_error_stat": false, 00:38:43.018 "rdma_srq_size": 0, 00:38:43.018 "io_path_stat": false, 00:38:43.018 "allow_accel_sequence": false, 00:38:43.018 "rdma_max_cq_size": 0, 00:38:43.018 "rdma_cm_event_timeout_ms": 0, 00:38:43.018 "dhchap_digests": [ 00:38:43.018 "sha256", 00:38:43.018 "sha384", 00:38:43.018 "sha512" 00:38:43.018 ], 00:38:43.018 "dhchap_dhgroups": [ 00:38:43.018 "null", 00:38:43.018 "ffdhe2048", 00:38:43.018 "ffdhe3072", 00:38:43.018 "ffdhe4096", 00:38:43.018 "ffdhe6144", 00:38:43.018 "ffdhe8192" 00:38:43.018 ] 00:38:43.018 } 00:38:43.018 }, 00:38:43.018 { 00:38:43.018 "method": "bdev_nvme_attach_controller", 00:38:43.018 "params": { 00:38:43.018 "name": "nvme0", 00:38:43.018 "trtype": "TCP", 00:38:43.018 "adrfam": "IPv4", 00:38:43.018 "traddr": "127.0.0.1", 00:38:43.018 "trsvcid": "4420", 00:38:43.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:43.018 "prchk_reftag": false, 00:38:43.018 "prchk_guard": false, 00:38:43.018 "ctrlr_loss_timeout_sec": 0, 00:38:43.018 "reconnect_delay_sec": 0, 00:38:43.018 "fast_io_fail_timeout_sec": 0, 00:38:43.018 "psk": "key0", 00:38:43.018 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:43.018 "hdgst": false, 00:38:43.018 "ddgst": false 00:38:43.018 } 00:38:43.018 }, 00:38:43.018 { 00:38:43.018 "method": "bdev_nvme_set_hotplug", 00:38:43.018 "params": { 00:38:43.018 "period_us": 100000, 00:38:43.018 "enable": false 00:38:43.018 } 00:38:43.018 }, 00:38:43.018 { 00:38:43.018 "method": "bdev_wait_for_examine" 00:38:43.018 } 00:38:43.018 ] 00:38:43.018 }, 00:38:43.018 { 00:38:43.018 "subsystem": "nbd", 00:38:43.018 "config": [] 00:38:43.018 } 00:38:43.018 ] 00:38:43.018 }' 00:38:43.018 17:03:34 keyring_file -- keyring/file.sh@115 -- # killprocess 2998522 00:38:43.018 17:03:34 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2998522 ']' 00:38:43.018 17:03:34 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2998522 00:38:43.018 17:03:34 keyring_file -- common/autotest_common.sh@955 -- # uname 00:38:43.018 17:03:34 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:43.018 17:03:34 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2998522 00:38:43.018 17:03:34 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:43.018 17:03:34 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:43.018 17:03:34 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2998522' 00:38:43.018 killing process with pid 2998522 00:38:43.018 17:03:34 keyring_file -- common/autotest_common.sh@969 -- # kill 2998522 00:38:43.018 Received shutdown signal, test time was about 1.000000 seconds 00:38:43.018 00:38:43.018 Latency(us) 00:38:43.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:43.018 =================================================================================================================== 00:38:43.018 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:43.018 17:03:34 keyring_file -- common/autotest_common.sh@974 -- # wait 2998522 00:38:43.278 17:03:34 keyring_file -- keyring/file.sh@118 -- # bperfpid=3000177 00:38:43.278 
17:03:34 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3000177 /var/tmp/bperf.sock 00:38:43.278 17:03:34 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3000177 ']' 00:38:43.278 17:03:34 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:43.278 17:03:34 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:43.278 17:03:34 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:43.278 17:03:34 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:43.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:43.278 17:03:34 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:43.278 17:03:34 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:43.278 "subsystems": [ 00:38:43.278 { 00:38:43.278 "subsystem": "keyring", 00:38:43.278 "config": [ 00:38:43.278 { 00:38:43.278 "method": "keyring_file_add_key", 00:38:43.278 "params": { 00:38:43.278 "name": "key0", 00:38:43.278 "path": "/tmp/tmp.jFHDBBacCI" 00:38:43.278 } 00:38:43.278 }, 00:38:43.278 { 00:38:43.278 "method": "keyring_file_add_key", 00:38:43.278 "params": { 00:38:43.278 "name": "key1", 00:38:43.278 "path": "/tmp/tmp.GiSmDL5Z0W" 00:38:43.278 } 00:38:43.278 } 00:38:43.278 ] 00:38:43.278 }, 00:38:43.278 { 00:38:43.278 "subsystem": "iobuf", 00:38:43.278 "config": [ 00:38:43.278 { 00:38:43.278 "method": "iobuf_set_options", 00:38:43.278 "params": { 00:38:43.278 "small_pool_count": 8192, 00:38:43.278 "large_pool_count": 1024, 00:38:43.278 "small_bufsize": 8192, 00:38:43.278 "large_bufsize": 135168 00:38:43.278 } 00:38:43.278 } 00:38:43.278 ] 00:38:43.278 }, 00:38:43.278 { 00:38:43.278 "subsystem": "sock", 00:38:43.278 "config": [ 00:38:43.278 { 00:38:43.278 "method": "sock_set_default_impl", 00:38:43.278 "params": { 00:38:43.278 "impl_name": "posix" 00:38:43.278 } 00:38:43.278 }, 00:38:43.278 { 00:38:43.278 "method": "sock_impl_set_options", 00:38:43.278 "params": { 00:38:43.278 "impl_name": "ssl", 00:38:43.278 "recv_buf_size": 4096, 00:38:43.278 "send_buf_size": 4096, 00:38:43.278 "enable_recv_pipe": true, 00:38:43.278 "enable_quickack": false, 00:38:43.278 "enable_placement_id": 0, 00:38:43.278 "enable_zerocopy_send_server": true, 00:38:43.278 "enable_zerocopy_send_client": false, 00:38:43.278 "zerocopy_threshold": 0, 00:38:43.278 "tls_version": 0, 00:38:43.278 "enable_ktls": false 00:38:43.278 } 00:38:43.278 }, 00:38:43.278 { 00:38:43.278 "method": "sock_impl_set_options", 00:38:43.278 "params": { 00:38:43.278 "impl_name": "posix", 00:38:43.278 "recv_buf_size": 2097152, 00:38:43.279 "send_buf_size": 2097152, 00:38:43.279 "enable_recv_pipe": true, 00:38:43.279 "enable_quickack": false, 00:38:43.279 "enable_placement_id": 0, 00:38:43.279 "enable_zerocopy_send_server": true, 00:38:43.279 "enable_zerocopy_send_client": false, 00:38:43.279 "zerocopy_threshold": 0, 00:38:43.279 "tls_version": 0, 00:38:43.279 "enable_ktls": false 00:38:43.279 } 00:38:43.279 } 00:38:43.279 ] 00:38:43.279 }, 00:38:43.279 { 00:38:43.279 "subsystem": "vmd", 00:38:43.279 "config": [] 00:38:43.279 }, 00:38:43.279 { 00:38:43.279 "subsystem": "accel", 00:38:43.279 "config": [ 00:38:43.279 { 00:38:43.279 "method": "accel_set_options", 00:38:43.279 "params": { 00:38:43.279 "small_cache_size": 
128, 00:38:43.279 "large_cache_size": 16, 00:38:43.279 "task_count": 2048, 00:38:43.279 "sequence_count": 2048, 00:38:43.279 "buf_count": 2048 00:38:43.279 } 00:38:43.279 } 00:38:43.279 ] 00:38:43.279 }, 00:38:43.279 { 00:38:43.279 "subsystem": "bdev", 00:38:43.279 "config": [ 00:38:43.279 { 00:38:43.279 "method": "bdev_set_options", 00:38:43.279 "params": { 00:38:43.279 "bdev_io_pool_size": 65535, 00:38:43.279 "bdev_io_cache_size": 256, 00:38:43.279 "bdev_auto_examine": true, 00:38:43.279 "iobuf_small_cache_size": 128, 00:38:43.279 "iobuf_large_cache_size": 16 00:38:43.279 } 00:38:43.279 }, 00:38:43.279 { 00:38:43.279 "method": "bdev_raid_set_options", 00:38:43.279 "params": { 00:38:43.279 "process_window_size_kb": 1024, 00:38:43.279 "process_max_bandwidth_mb_sec": 0 00:38:43.279 } 00:38:43.279 }, 00:38:43.279 { 00:38:43.279 "method": "bdev_iscsi_set_options", 00:38:43.279 "params": { 00:38:43.279 "timeout_sec": 30 00:38:43.279 } 00:38:43.279 }, 00:38:43.279 { 00:38:43.279 "method": "bdev_nvme_set_options", 00:38:43.279 "params": { 00:38:43.279 "action_on_timeout": "none", 00:38:43.279 "timeout_us": 0, 00:38:43.279 "timeout_admin_us": 0, 00:38:43.279 "keep_alive_timeout_ms": 10000, 00:38:43.279 "arbitration_burst": 0, 00:38:43.279 "low_priority_weight": 0, 00:38:43.279 "medium_priority_weight": 0, 00:38:43.279 "high_priority_weight": 0, 00:38:43.279 "nvme_adminq_poll_period_us": 10000, 00:38:43.279 "nvme_ioq_poll_period_us": 0, 00:38:43.279 "io_queue_requests": 512, 00:38:43.279 "delay_cmd_submit": true, 00:38:43.279 "transport_retry_count": 4, 00:38:43.279 "bdev_retry_count": 3, 00:38:43.279 "transport_ack_timeout": 0, 00:38:43.279 "ctrlr_loss_timeout_sec": 0, 00:38:43.279 "reconnect_delay_sec": 0, 00:38:43.279 "fast_io_fail_timeout_sec": 0, 00:38:43.279 "disable_auto_failback": false, 00:38:43.279 "generate_uuids": false, 00:38:43.279 "transport_tos": 0, 00:38:43.279 "nvme_error_stat": false, 00:38:43.279 "rdma_srq_size": 0, 00:38:43.279 "io_path_stat": false, 00:38:43.279 17:03:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:43.279 "allow_accel_sequence": false, 00:38:43.279 "rdma_max_cq_size": 0, 00:38:43.279 "rdma_cm_event_timeout_ms": 0, 00:38:43.279 "dhchap_digests": [ 00:38:43.279 "sha256", 00:38:43.279 "sha384", 00:38:43.279 "sha512" 00:38:43.279 ], 00:38:43.279 "dhchap_dhgroups": [ 00:38:43.279 "null", 00:38:43.279 "ffdhe2048", 00:38:43.279 "ffdhe3072", 00:38:43.279 "ffdhe4096", 00:38:43.279 "ffdhe6144", 00:38:43.279 "ffdhe8192" 00:38:43.279 ] 00:38:43.279 } 00:38:43.279 }, 00:38:43.279 { 00:38:43.279 "method": "bdev_nvme_attach_controller", 00:38:43.279 "params": { 00:38:43.279 "name": "nvme0", 00:38:43.279 "trtype": "TCP", 00:38:43.279 "adrfam": "IPv4", 00:38:43.279 "traddr": "127.0.0.1", 00:38:43.279 "trsvcid": "4420", 00:38:43.279 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:43.279 "prchk_reftag": false, 00:38:43.279 "prchk_guard": false, 00:38:43.279 "ctrlr_loss_timeout_sec": 0, 00:38:43.279 "reconnect_delay_sec": 0, 00:38:43.279 "fast_io_fail_timeout_sec": 0, 00:38:43.279 "psk": "key0", 00:38:43.279 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:43.279 "hdgst": false, 00:38:43.279 "ddgst": false 00:38:43.279 } 00:38:43.279 }, 00:38:43.279 { 00:38:43.279 "method": "bdev_nvme_set_hotplug", 00:38:43.279 "params": { 00:38:43.279 "period_us": 100000, 00:38:43.279 "enable": false 00:38:43.279 } 00:38:43.279 }, 00:38:43.279 { 00:38:43.279 "method": "bdev_wait_for_examine" 00:38:43.279 } 00:38:43.279 ] 00:38:43.279 }, 00:38:43.279 { 00:38:43.279 "subsystem": 
"nbd", 00:38:43.279 "config": [] 00:38:43.279 } 00:38:43.279 ] 00:38:43.279 }' 00:38:43.279 [2024-10-01 17:03:34.846997] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 00:38:43.279 [2024-10-01 17:03:34.847050] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3000177 ] 00:38:43.279 [2024-10-01 17:03:34.897415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:43.279 [2024-10-01 17:03:34.951644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:43.545 [2024-10-01 17:03:35.093770] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:44.152 17:03:35 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:44.152 17:03:35 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:38:44.152 17:03:35 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:44.152 17:03:35 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:44.152 17:03:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:44.436 17:03:35 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:44.436 17:03:35 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:44.436 17:03:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:44.436 17:03:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:44.436 17:03:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:44.436 17:03:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:44.436 17:03:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:44.725 17:03:36 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:44.725 17:03:36 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:44.725 17:03:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:44.725 17:03:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:44.725 17:03:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:44.725 17:03:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:44.725 17:03:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:44.725 17:03:36 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:44.725 17:03:36 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:44.725 17:03:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:44.725 17:03:36 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:45.027 17:03:36 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:45.027 17:03:36 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:45.027 17:03:36 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.jFHDBBacCI /tmp/tmp.GiSmDL5Z0W 00:38:45.027 17:03:36 keyring_file -- keyring/file.sh@20 -- # killprocess 3000177 00:38:45.027 17:03:36 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3000177 ']' 
00:38:45.027 17:03:36 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3000177 00:38:45.027 17:03:36 keyring_file -- common/autotest_common.sh@955 -- # uname 00:38:45.027 17:03:36 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:45.027 17:03:36 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3000177 00:38:45.027 17:03:36 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:45.027 17:03:36 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:45.027 17:03:36 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3000177' 00:38:45.027 killing process with pid 3000177 00:38:45.027 17:03:36 keyring_file -- common/autotest_common.sh@969 -- # kill 3000177 00:38:45.027 Received shutdown signal, test time was about 1.000000 seconds 00:38:45.027 00:38:45.027 Latency(us) 00:38:45.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:45.027 =================================================================================================================== 00:38:45.027 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:45.027 17:03:36 keyring_file -- common/autotest_common.sh@974 -- # wait 3000177 00:38:45.307 17:03:36 keyring_file -- keyring/file.sh@21 -- # killprocess 2998337 00:38:45.307 17:03:36 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2998337 ']' 00:38:45.307 17:03:36 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2998337 00:38:45.307 17:03:36 keyring_file -- common/autotest_common.sh@955 -- # uname 00:38:45.307 17:03:36 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:45.307 17:03:36 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2998337 00:38:45.307 17:03:36 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:45.307 17:03:36 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:45.307 17:03:36 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2998337' 00:38:45.307 killing process with pid 2998337 00:38:45.307 17:03:36 keyring_file -- common/autotest_common.sh@969 -- # kill 2998337 00:38:45.307 17:03:36 keyring_file -- common/autotest_common.sh@974 -- # wait 2998337 00:38:45.567 00:38:45.567 real 0m13.006s 00:38:45.567 user 0m32.345s 00:38:45.567 sys 0m2.703s 00:38:45.567 17:03:37 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:45.567 17:03:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:45.567 ************************************ 00:38:45.567 END TEST keyring_file 00:38:45.567 ************************************ 00:38:45.567 17:03:37 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:38:45.567 17:03:37 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:45.567 17:03:37 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:45.567 17:03:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:45.567 17:03:37 -- common/autotest_common.sh@10 -- # set +x 00:38:45.567 ************************************ 00:38:45.567 START TEST keyring_linux 00:38:45.567 ************************************ 00:38:45.567 17:03:37 keyring_linux -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:45.567 Joined session keyring: 615425965 00:38:45.567 * Looking for test storage... 00:38:45.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:45.567 17:03:37 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:45.567 17:03:37 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:38:45.567 17:03:37 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:45.828 17:03:37 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:45.828 17:03:37 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:45.828 17:03:37 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:45.828 17:03:37 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:45.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.828 --rc genhtml_branch_coverage=1 00:38:45.828 --rc genhtml_function_coverage=1 00:38:45.828 --rc genhtml_legend=1 00:38:45.828 --rc geninfo_all_blocks=1 00:38:45.828 --rc geninfo_unexecuted_blocks=1 00:38:45.828 00:38:45.828 ' 00:38:45.828 17:03:37 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:45.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.828 --rc genhtml_branch_coverage=1 00:38:45.828 --rc genhtml_function_coverage=1 00:38:45.828 --rc genhtml_legend=1 00:38:45.828 --rc geninfo_all_blocks=1 00:38:45.828 --rc geninfo_unexecuted_blocks=1 00:38:45.828 00:38:45.828 ' 00:38:45.828 17:03:37 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:45.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.828 --rc genhtml_branch_coverage=1 00:38:45.828 --rc genhtml_function_coverage=1 00:38:45.828 --rc genhtml_legend=1 00:38:45.828 --rc geninfo_all_blocks=1 00:38:45.828 --rc geninfo_unexecuted_blocks=1 00:38:45.828 00:38:45.828 ' 00:38:45.828 17:03:37 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:45.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.828 --rc genhtml_branch_coverage=1 00:38:45.828 --rc genhtml_function_coverage=1 00:38:45.828 --rc genhtml_legend=1 00:38:45.828 --rc geninfo_all_blocks=1 00:38:45.828 --rc geninfo_unexecuted_blocks=1 00:38:45.828 00:38:45.828 ' 00:38:45.828 17:03:37 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:45.828 17:03:37 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:45.828 17:03:37 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:45.828 17:03:37 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:45.828 17:03:37 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:45.828 17:03:37 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:45.828 17:03:37 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:45.828 17:03:37 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:45.828 17:03:37 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:45.828 17:03:37 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:45.828 17:03:37 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:45.828 17:03:37 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:45.828 17:03:37 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:45.828 17:03:37 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:38:45.828 17:03:37 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:38:45.828 17:03:37 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:45.828 17:03:37 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:45.828 17:03:37 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:45.828 17:03:37 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:45.829 17:03:37 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:45.829 17:03:37 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:45.829 17:03:37 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:45.829 17:03:37 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:45.829 17:03:37 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.829 17:03:37 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.829 17:03:37 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.829 17:03:37 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:45.829 17:03:37 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:45.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:45.829 17:03:37 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:45.829 17:03:37 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:45.829 17:03:37 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:45.829 17:03:37 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:45.829 17:03:37 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:45.829 17:03:37 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:45.829 17:03:37 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:45.829 17:03:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:45.829 17:03:37 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:45.829 17:03:37 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:45.829 17:03:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:45.829 17:03:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:45.829 17:03:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@731 -- # python - 00:38:45.829 17:03:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:45.829 17:03:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:45.829 /tmp/:spdk-test:key0 00:38:45.829 17:03:37 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:45.829 17:03:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:45.829 17:03:37 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:45.829 17:03:37 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:45.829 17:03:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:45.829 17:03:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:45.829 
17:03:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:38:45.829 17:03:37 keyring_linux -- nvmf/common.sh@731 -- # python - 00:38:45.829 17:03:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:45.829 17:03:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:45.829 /tmp/:spdk-test:key1 00:38:45.829 17:03:37 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:45.829 17:03:37 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3000616 00:38:45.829 17:03:37 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3000616 00:38:45.829 17:03:37 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3000616 ']' 00:38:45.829 17:03:37 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:45.829 17:03:37 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:45.829 17:03:37 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:45.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:45.829 17:03:37 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:45.829 17:03:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:46.089 [2024-10-01 17:03:37.558648] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
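The prep_key/format_interchange_psk steps above wrap each configured hex key in the NVMe TLS PSK interchange format before writing it to /tmp. A minimal sketch of that derivation, assuming — as the inline `python -` step and the resulting payload suggest — that the key text is used verbatim, a little-endian CRC32 is appended, and the whole thing is base64-encoded under the NVMeTLSkey-1 prefix:

```sh
# Hypothetical re-derivation of the NVMeTLSkey-1 interchange value built above
# for key0: append a little-endian CRC32 to the key text, base64-encode the
# result, and wrap it in the 'NVMeTLSkey-1:<digest>:...:' envelope.
key=00112233445566778899aabbccddeeff
b64=$(python3 -c "import base64, struct, sys, zlib
k = sys.argv[1].encode()
print(base64.b64encode(k + struct.pack('<I', zlib.crc32(k))).decode())" "$key")
echo "NVMeTLSkey-1:00:${b64}:"   # expected to match the key0 payload stored below
```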
00:38:46.089 [2024-10-01 17:03:37.558728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3000616 ] 00:38:46.089 [2024-10-01 17:03:37.640248] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.089 [2024-10-01 17:03:37.710156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:47.028 17:03:38 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:47.028 17:03:38 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:38:47.028 17:03:38 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:47.028 17:03:38 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.028 17:03:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:47.028 [2024-10-01 17:03:38.411126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:47.028 null0 00:38:47.028 [2024-10-01 17:03:38.443176] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:47.028 [2024-10-01 17:03:38.443552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:47.028 17:03:38 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.028 17:03:38 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:47.028 934008661 00:38:47.028 17:03:38 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:47.028 367261368 00:38:47.028 17:03:38 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3000902 00:38:47.028 17:03:38 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3000902 /var/tmp/bperf.sock 00:38:47.028 17:03:38 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:47.028 17:03:38 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3000902 ']' 00:38:47.028 17:03:38 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:47.028 17:03:38 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:47.028 17:03:38 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:47.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:47.028 17:03:38 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:47.028 17:03:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:47.028 [2024-10-01 17:03:38.531716] Starting SPDK v25.01-pre git sha1 1c027d356 / DPDK 24.03.0 initialization... 
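The serials printed above (934008661 for :spdk-test:key0, 367261368 for :spdk-test:key1) are kernel key serials in the session keyring. A standalone, hypothetical replay of the round-trip the test performs with them (the payload value is the one from this run):

```sh
# Hypothetical standalone replay of the session-keyring round-trip in this
# test: store the interchange PSK, look up its serial, verify the payload
# round-trips intact, then unlink it (the '1 links removed' lines seen later).
psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
keyctl add user :spdk-test:key0 "$psk" @s       # prints the new key's serial
sn=$(keyctl search @s user :spdk-test:key0)     # 934008661 in the run above
[[ $(keyctl print "$sn") == "$psk" ]]           # payload must match what was added
keyctl unlink "$sn" @s                          # remove the key from the session keyring
```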
00:38:47.028 [2024-10-01 17:03:38.531764] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3000902 ] 00:38:47.028 [2024-10-01 17:03:38.581399] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.028 [2024-10-01 17:03:38.636099] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:47.028 17:03:38 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:47.028 17:03:38 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:38:47.028 17:03:38 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:47.028 17:03:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:47.288 17:03:38 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:47.288 17:03:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:47.547 17:03:39 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:47.547 17:03:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:47.807 [2024-10-01 17:03:39.358326] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:47.807 nvme0n1 00:38:47.807 17:03:39 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:38:47.807 17:03:39 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:47.807 17:03:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:47.807 17:03:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:47.807 17:03:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:47.807 17:03:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:48.067 17:03:39 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:48.067 17:03:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:48.067 17:03:39 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:48.067 17:03:39 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:48.067 17:03:39 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:48.067 17:03:39 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:48.067 17:03:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:48.327 17:03:39 keyring_linux -- keyring/linux.sh@25 -- # sn=934008661 00:38:48.327 17:03:39 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:48.327 17:03:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:48.327 17:03:39 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 934008661 == \9\3\4\0\0\8\6\6\1 ]] 00:38:48.327 17:03:39 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 934008661 00:38:48.327 17:03:39 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:48.327 17:03:39 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:48.327 Running I/O for 1 seconds... 00:38:49.708 5805.00 IOPS, 22.68 MiB/s 00:38:49.708 Latency(us) 00:38:49.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:49.708 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:49.709 nvme0n1 : 1.06 5600.81 21.88 0.00 0.00 22676.45 3806.13 85499.27 00:38:49.709 =================================================================================================================== 00:38:49.709 Total : 5600.81 21.88 0.00 0.00 22676.45 3806.13 85499.27 00:38:49.709 { 00:38:49.709 "results": [ 00:38:49.709 { 00:38:49.709 "job": "nvme0n1", 00:38:49.709 "core_mask": "0x2", 00:38:49.709 "workload": "randread", 00:38:49.709 "status": "finished", 00:38:49.709 "queue_depth": 128, 00:38:49.709 "io_size": 4096, 00:38:49.709 "runtime": 1.059312, 00:38:49.709 "iops": 5600.805050825442, 00:38:49.709 "mibps": 21.878144729786882, 00:38:49.709 "io_failed": 0, 00:38:49.709 "io_timeout": 0, 00:38:49.709 "avg_latency_us": 22676.44674247041, 00:38:49.709 "min_latency_us": 3806.1292307692306, 00:38:49.709 "max_latency_us": 85499.27384615384 00:38:49.709 } 00:38:49.709 ], 00:38:49.709 "core_count": 1 00:38:49.709 } 00:38:49.709 17:03:41 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:49.709 17:03:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:49.709 17:03:41 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:49.709 17:03:41 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:49.709 17:03:41 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:49.709 17:03:41 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:49.709 17:03:41 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:49.709 17:03:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:49.969 17:03:41 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:49.969 17:03:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:49.969 17:03:41 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:49.969 17:03:41 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:49.969 17:03:41 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:38:49.969 17:03:41 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:49.969 17:03:41 keyring_linux -- common/autotest_common.sh@638 -- 
# local arg=bperf_cmd 00:38:49.969 17:03:41 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:49.969 17:03:41 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:49.969 17:03:41 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:49.969 17:03:41 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:49.969 17:03:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:50.229 [2024-10-01 17:03:41.702117] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:50.229 [2024-10-01 17:03:41.702868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b88110 (107): Transport endpoint is not connected 00:38:50.229 [2024-10-01 17:03:41.703864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b88110 (9): Bad file descriptor 00:38:50.229 [2024-10-01 17:03:41.704865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:50.229 [2024-10-01 17:03:41.704873] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:50.229 [2024-10-01 17:03:41.704879] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:50.229 [2024-10-01 17:03:41.704886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:38:50.229 request: 00:38:50.229 { 00:38:50.229 "name": "nvme0", 00:38:50.229 "trtype": "tcp", 00:38:50.229 "traddr": "127.0.0.1", 00:38:50.229 "adrfam": "ipv4", 00:38:50.229 "trsvcid": "4420", 00:38:50.229 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:50.229 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:50.229 "prchk_reftag": false, 00:38:50.229 "prchk_guard": false, 00:38:50.229 "hdgst": false, 00:38:50.229 "ddgst": false, 00:38:50.229 "psk": ":spdk-test:key1", 00:38:50.229 "allow_unrecognized_csi": false, 00:38:50.229 "method": "bdev_nvme_attach_controller", 00:38:50.229 "req_id": 1 00:38:50.229 } 00:38:50.229 Got JSON-RPC error response 00:38:50.229 response: 00:38:50.229 { 00:38:50.229 "code": -5, 00:38:50.229 "message": "Input/output error" 00:38:50.229 } 00:38:50.229 17:03:41 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:38:50.229 17:03:41 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:50.229 17:03:41 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:50.229 17:03:41 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:50.229 17:03:41 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:50.229 17:03:41 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:50.229 17:03:41 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:50.229 17:03:41 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:50.229 17:03:41 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:50.229 17:03:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:50.229 17:03:41 keyring_linux -- keyring/linux.sh@33 -- # sn=934008661 00:38:50.229 17:03:41 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 934008661 00:38:50.229 1 links removed 00:38:50.229 17:03:41 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:50.229 17:03:41 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:50.229 17:03:41 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:50.229 17:03:41 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:50.229 17:03:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:50.229 17:03:41 keyring_linux -- keyring/linux.sh@33 -- # sn=367261368 00:38:50.229 17:03:41 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 367261368 00:38:50.229 1 links removed 00:38:50.229 17:03:41 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3000902 00:38:50.229 17:03:41 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3000902 ']' 00:38:50.229 17:03:41 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3000902 00:38:50.229 17:03:41 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:38:50.229 17:03:41 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:50.229 17:03:41 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3000902 00:38:50.229 17:03:41 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:50.229 17:03:41 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:50.229 17:03:41 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3000902' 00:38:50.229 killing process with pid 3000902 00:38:50.229 17:03:41 keyring_linux -- common/autotest_common.sh@969 -- # kill 3000902 00:38:50.229 Received shutdown signal, test time was about 1.000000 seconds 00:38:50.229 00:38:50.229 
Latency(us) 00:38:50.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.229 =================================================================================================================== 00:38:50.230 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:50.230 17:03:41 keyring_linux -- common/autotest_common.sh@974 -- # wait 3000902 00:38:50.490 17:03:41 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3000616 00:38:50.490 17:03:41 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3000616 ']' 00:38:50.490 17:03:41 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3000616 00:38:50.490 17:03:41 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:38:50.490 17:03:41 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:50.490 17:03:41 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3000616 00:38:50.490 17:03:41 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:50.490 17:03:41 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:50.490 17:03:41 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3000616' 00:38:50.490 killing process with pid 3000616 00:38:50.490 17:03:41 keyring_linux -- common/autotest_common.sh@969 -- # kill 3000616 00:38:50.490 17:03:41 keyring_linux -- common/autotest_common.sh@974 -- # wait 3000616 00:38:50.750 00:38:50.750 real 0m5.078s 00:38:50.750 user 0m9.997s 00:38:50.750 sys 0m1.128s 00:38:50.750 17:03:42 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:50.750 17:03:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:50.750 ************************************ 00:38:50.750 END TEST keyring_linux 00:38:50.750 ************************************ 00:38:50.750 17:03:42 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:38:50.750 17:03:42 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:50.750 17:03:42 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:50.750 17:03:42 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:38:50.750 17:03:42 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:38:50.750 17:03:42 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:38:50.750 17:03:42 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:50.750 17:03:42 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:50.750 17:03:42 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:50.750 17:03:42 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:38:50.750 17:03:42 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:50.750 17:03:42 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:38:50.750 17:03:42 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:50.750 17:03:42 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:50.750 17:03:42 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:38:50.750 17:03:42 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:38:50.750 17:03:42 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:38:50.750 17:03:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:50.750 17:03:42 -- common/autotest_common.sh@10 -- # set +x 00:38:50.750 17:03:42 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:38:50.750 17:03:42 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:38:50.750 17:03:42 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:38:50.750 17:03:42 -- common/autotest_common.sh@10 -- # set +x 00:38:57.332 INFO: APP EXITING 00:38:57.332 INFO: killing all VMs 00:38:57.332 INFO: killing vhost app 00:38:57.332 INFO: 
EXIT DONE 00:39:00.628 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:39:00.628 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:39:00.628 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:39:00.628 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:39:00.628 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:39:00.628 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:39:00.628 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:39:00.628 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:39:00.628 0000:65:00.0 (8086 0a54): Already using the nvme driver 00:39:00.628 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:39:00.628 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:39:00.628 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:39:00.628 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:39:00.888 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:39:00.888 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:39:00.888 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:39:00.888 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:39:05.090 Cleaning 00:39:05.090 Removing: /var/run/dpdk/spdk0/config 00:39:05.090 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:05.090 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:05.091 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:05.091 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:05.091 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:05.091 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:05.091 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:05.091 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:05.091 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:05.091 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:05.091 Removing: /var/run/dpdk/spdk1/config 00:39:05.091 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:05.091 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:05.091 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:05.091 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:05.091 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:05.091 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:05.091 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:05.091 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:05.091 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:05.091 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:05.091 Removing: /var/run/dpdk/spdk2/config 00:39:05.091 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:05.091 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:05.091 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:05.091 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:05.091 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:05.091 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:05.091 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:05.091 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:05.091 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:05.091 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:05.091 Removing: /var/run/dpdk/spdk3/config 00:39:05.091 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:05.091 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:05.091 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:05.091 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:05.091 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:05.091 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:05.091 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:05.091 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:05.091 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:05.091 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:05.091 Removing: /var/run/dpdk/spdk4/config 00:39:05.091 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:05.091 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:05.091 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:05.091 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:05.091 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:05.091 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:05.091 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:05.091 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:05.091 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:05.091 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:05.091 Removing: /dev/shm/bdev_svc_trace.1 00:39:05.091 Removing: /dev/shm/nvmf_trace.0 00:39:05.091 Removing: /dev/shm/spdk_tgt_trace.pid2474734 00:39:05.091 Removing: /var/run/dpdk/spdk0 00:39:05.091 Removing: /var/run/dpdk/spdk1 00:39:05.091 Removing: /var/run/dpdk/spdk2 00:39:05.091 Removing: /var/run/dpdk/spdk3 00:39:05.091 Removing: /var/run/dpdk/spdk4 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2471318 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2472730 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2474734 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2475519 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2476464 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2476711 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2477701 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2477760 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2477936 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2479850 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2481199 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2481561 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2481928 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2482310 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2482680 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2482769 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2483040 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2483400 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2484207 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2487912 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2488097 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2488295 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2488301 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2488667 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2488948 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2489289 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2489478 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2489634 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2489906 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2489983 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2490047 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2490661 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2490745 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2491095 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2495441 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2500368 
00:39:05.091 Removing: /var/run/dpdk/spdk_pid2511003 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2511612 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2516366 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2516818 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2521637 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2528127 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2531069 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2542884 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2552861 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2554577 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2555550 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2574300 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2578800 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2630854 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2637201 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2643711 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2650506 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2650566 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2651337 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2652112 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2653009 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2653615 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2653617 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2653921 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2653959 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2654048 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2654871 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2655764 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2656672 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2657278 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2657286 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2657589 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2658773 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2659897 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2668812 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2702288 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2707195 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2709106 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2711386 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2711413 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2711700 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2711727 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2712322 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2714160 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2715042 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2715533 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2717704 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2718338 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2719012 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2723570 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2729632 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2729633 00:39:05.091 Removing: /var/run/dpdk/spdk_pid2729634 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2733889 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2743867 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2748108 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2754778 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2756553 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2757985 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2759110 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2764315 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2768870 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2777665 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2777673 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2782974 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2783164 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2783446 
00:39:05.352 Removing: /var/run/dpdk/spdk_pid2783771 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2783853 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2789171 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2789679 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2794648 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2797392 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2803159 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2809974 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2819264 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2827244 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2827246 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2848933 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2849440 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2850035 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2850646 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2851387 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2852057 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2852668 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2853593 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2858293 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2858517 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2864978 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2865126 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2871148 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2875810 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2886320 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2886931 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2891507 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2891845 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2896671 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2903617 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2906230 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2917423 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2927061 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2928634 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2929554 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2947708 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2952221 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2955498 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2964120 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2964131 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2970137 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2972126 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2974114 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2975206 00:39:05.352 Removing: /var/run/dpdk/spdk_pid2977417 00:39:05.612 Removing: /var/run/dpdk/spdk_pid2978563 00:39:05.612 Removing: /var/run/dpdk/spdk_pid2988501 00:39:05.612 Removing: /var/run/dpdk/spdk_pid2988989 00:39:05.612 Removing: /var/run/dpdk/spdk_pid2989589 00:39:05.612 Removing: /var/run/dpdk/spdk_pid2992304 00:39:05.612 Removing: /var/run/dpdk/spdk_pid2992814 00:39:05.612 Removing: /var/run/dpdk/spdk_pid2993422 00:39:05.612 Removing: /var/run/dpdk/spdk_pid2998337 00:39:05.612 Removing: /var/run/dpdk/spdk_pid2998522 00:39:05.612 Removing: /var/run/dpdk/spdk_pid3000177 00:39:05.612 Removing: /var/run/dpdk/spdk_pid3000616 00:39:05.612 Removing: /var/run/dpdk/spdk_pid3000902 00:39:05.612 Clean 00:39:05.612 17:03:57 -- common/autotest_common.sh@1451 -- # return 0 00:39:05.612 17:03:57 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:39:05.612 17:03:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:05.612 17:03:57 -- common/autotest_common.sh@10 -- # set +x 00:39:05.612 17:03:57 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:39:05.612 17:03:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:05.612 17:03:57 -- 
common/autotest_common.sh@10 -- # set +x 00:39:05.612 17:03:57 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:05.612 17:03:57 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:39:05.612 17:03:57 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:39:05.612 17:03:57 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:39:05.612 17:03:57 -- spdk/autotest.sh@394 -- # hostname 00:39:05.612 17:03:57 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:39:05.872 geninfo: WARNING: invalid characters removed from testname! 00:39:32.449 17:04:21 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:33.017 17:04:24 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:35.553 17:04:26 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:37.460 17:04:29 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:40.000 17:04:31 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:41.911 17:04:33 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
00:39:44.453 17:04:35 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:39:44.453 17:04:35 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:39:44.453 17:04:35 -- common/autotest_common.sh@1681 -- $ lcov --version
00:39:44.453 17:04:35 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:39:44.453 17:04:35 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:39:44.453 17:04:35 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:39:44.453 17:04:35 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:39:44.453 17:04:35 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:39:44.453 17:04:35 -- scripts/common.sh@336 -- $ IFS=.-:
00:39:44.453 17:04:35 -- scripts/common.sh@336 -- $ read -ra ver1
00:39:44.453 17:04:35 -- scripts/common.sh@337 -- $ IFS=.-:
00:39:44.453 17:04:35 -- scripts/common.sh@337 -- $ read -ra ver2
00:39:44.453 17:04:35 -- scripts/common.sh@338 -- $ local 'op=<'
00:39:44.453 17:04:35 -- scripts/common.sh@340 -- $ ver1_l=2
00:39:44.453 17:04:35 -- scripts/common.sh@341 -- $ ver2_l=1
00:39:44.453 17:04:35 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:39:44.453 17:04:35 -- scripts/common.sh@344 -- $ case "$op" in
00:39:44.453 17:04:35 -- scripts/common.sh@345 -- $ : 1
00:39:44.453 17:04:35 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:39:44.453 17:04:35 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:39:44.453 17:04:35 -- scripts/common.sh@365 -- $ decimal 1
00:39:44.453 17:04:35 -- scripts/common.sh@353 -- $ local d=1
00:39:44.453 17:04:35 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:39:44.453 17:04:35 -- scripts/common.sh@355 -- $ echo 1
00:39:44.453 17:04:35 -- scripts/common.sh@365 -- $ ver1[v]=1
00:39:44.453 17:04:35 -- scripts/common.sh@366 -- $ decimal 2
00:39:44.453 17:04:35 -- scripts/common.sh@353 -- $ local d=2
00:39:44.453 17:04:35 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:39:44.453 17:04:35 -- scripts/common.sh@355 -- $ echo 2
00:39:44.453 17:04:35 -- scripts/common.sh@366 -- $ ver2[v]=2
00:39:44.453 17:04:35 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:39:44.453 17:04:35 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:39:44.453 17:04:35 -- scripts/common.sh@368 -- $ return 0
00:39:44.453 17:04:35 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
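The xtrace above (cmp_versions 1.15 '<' 2) is scripts/common.sh splitting the two version strings on dots and comparing them field by field, concluding that the installed lcov 1.15 is older than 2.x; that result is what selects the --rc lcov_branch_coverage/lcov_function_coverage options. A condensed sketch of the same comparison, assuming purely numeric fields; the function name is illustrative, not the real helper:

    # Return 0 (true) when $1 < $2, comparing dot-separated numeric fields,
    # missing fields counting as 0 (so "1.15" < "2" because 1 < 2).
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i a b n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            a=${v1[i]:-0} b=${v2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1  # versions are equal
    }

    version_lt 1.15 2 && echo "lcov < 2: enable the branch/function coverage --rc options"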
00:39:44.453 17:04:35 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:39:44.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:44.453 --rc genhtml_branch_coverage=1
00:39:44.453 --rc genhtml_function_coverage=1
00:39:44.453 --rc genhtml_legend=1
00:39:44.453 --rc geninfo_all_blocks=1
00:39:44.453 --rc geninfo_unexecuted_blocks=1
00:39:44.453 
00:39:44.453 '
00:39:44.453 17:04:35 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:39:44.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:44.453 --rc genhtml_branch_coverage=1
00:39:44.453 --rc genhtml_function_coverage=1
00:39:44.453 --rc genhtml_legend=1
00:39:44.453 --rc geninfo_all_blocks=1
00:39:44.453 --rc geninfo_unexecuted_blocks=1
00:39:44.453 
00:39:44.453 '
00:39:44.453 17:04:35 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:39:44.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:44.453 --rc genhtml_branch_coverage=1
00:39:44.453 --rc genhtml_function_coverage=1
00:39:44.453 --rc genhtml_legend=1
00:39:44.453 --rc geninfo_all_blocks=1
00:39:44.453 --rc geninfo_unexecuted_blocks=1
00:39:44.453 
00:39:44.453 '
00:39:44.453 17:04:35 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:39:44.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:44.453 --rc genhtml_branch_coverage=1
00:39:44.453 --rc genhtml_function_coverage=1
00:39:44.453 --rc genhtml_legend=1
00:39:44.453 --rc geninfo_all_blocks=1
00:39:44.453 --rc geninfo_unexecuted_blocks=1
00:39:44.453 
00:39:44.453 '
00:39:44.453 17:04:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:39:44.453 17:04:35 -- scripts/common.sh@15 -- $ shopt -s extglob
00:39:44.453 17:04:35 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:39:44.453 17:04:35 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:39:44.453 17:04:35 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:39:44.453 17:04:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:44.454 17:04:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:44.454 17:04:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:44.454 17:04:35 -- paths/export.sh@5 -- $ export PATH
00:39:44.454 17:04:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
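Worth noting in the paths/export.sh lines above: each toolchain directory is prepended unconditionally, so /opt/protoc, /opt/go and /opt/golangci each appear twice in the echoed PATH. A small guard that keeps such prepends idempotent; the helper name is hypothetical, not something export.sh defines:

    # Hypothetical helper: prepend a directory to PATH only if it is absent,
    # avoiding the duplicate entries visible in the echo above.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already present, leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }

    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH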
00:39:44.454 17:04:35 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:39:44.454 17:04:35 -- common/autobuild_common.sh@479 -- $ date +%s
00:39:44.454 17:04:35 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727795075.XXXXXX
00:39:44.454 17:04:35 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727795075.eZgKFy
00:39:44.454 17:04:35 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:39:44.454 17:04:35 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']'
00:39:44.454 17:04:35 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:39:44.454 17:04:35 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:39:44.454 17:04:35 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:39:44.454 17:04:35 -- common/autobuild_common.sh@495 -- $ get_config_params
00:39:44.454 17:04:35 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:39:44.454 17:04:35 -- common/autotest_common.sh@10 -- $ set +x
00:39:44.454 17:04:35 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:39:44.454 17:04:35 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:39:44.454 17:04:35 -- pm/common@17 -- $ local monitor
00:39:44.454 17:04:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:44.454 17:04:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:44.454 17:04:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:44.454 17:04:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:44.454 17:04:35 -- pm/common@25 -- $ sleep 1
00:39:44.454 17:04:35 -- pm/common@21 -- $ date +%s
00:39:44.454 17:04:35 -- pm/common@21 -- $ date +%s
00:39:44.454 17:04:35 -- pm/common@21 -- $ date +%s
00:39:44.454 17:04:35 -- pm/common@21 -- $ date +%s
00:39:44.454 17:04:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727795075
00:39:44.454 17:04:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727795075
00:39:44.454 17:04:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727795075
00:39:44.454 17:04:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727795075
00:39:44.454 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727795075_collect-cpu-load.pm.log
00:39:44.454 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727795075_collect-vmstat.pm.log
00:39:44.454 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727795075_collect-cpu-temp.pm.log
00:39:44.454 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727795075_collect-bmc-pm.bmc.pm.log
00:39:45.396 17:04:36 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:39:45.396 17:04:36 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:39:45.396 17:04:36 -- spdk/autopackage.sh@14 -- $ timing_finish
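start_monitor_resources launches the four collect-* samplers in the background, and each records its pid under the power/ output directory so the EXIT trap can tear them down later (the kill -TERM lines further down). A minimal sketch of that pid-file pattern, with an illustrative sampler loop standing in for the real scripts/perf/pm collectors:

    # Minimal pid-file monitor pattern; the sampler body here is a stand-in,
    # not the actual collect-* implementation.
    outdir=/tmp/power
    mkdir -p "$outdir"

    start_monitor() {
        local name=$1
        while :; do date +%s >> "$outdir/$name.log"; sleep 1; done &
        echo $! > "$outdir/$name.pid"    # record the pid for later cleanup
    }

    stop_monitors() {
        local pidfile
        for pidfile in "$outdir"/*.pid; do
            [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
        done
    }

    trap stop_monitors EXIT              # mirrors 'trap stop_monitor_resources EXIT'
    start_monitor collect-cpu-load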
00:39:45.396 17:04:36 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:45.396 17:04:36 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:39:45.396 17:04:36 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:45.396 17:04:36 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:39:45.396 17:04:36 -- pm/common@29 -- $ signal_monitor_resources TERM
00:39:45.396 17:04:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:39:45.396 17:04:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:45.396 17:04:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:39:45.396 17:04:36 -- pm/common@44 -- $ pid=3013039
00:39:45.396 17:04:36 -- pm/common@50 -- $ kill -TERM 3013039
00:39:45.396 17:04:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:45.396 17:04:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:39:45.396 17:04:36 -- pm/common@44 -- $ pid=3013040
00:39:45.396 17:04:36 -- pm/common@50 -- $ kill -TERM 3013040
00:39:45.396 17:04:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:45.396 17:04:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:39:45.396 17:04:36 -- pm/common@44 -- $ pid=3013041
00:39:45.396 17:04:36 -- pm/common@50 -- $ kill -TERM 3013041
00:39:45.396 17:04:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:45.396 17:04:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:39:45.396 17:04:36 -- pm/common@44 -- $ pid=3013066
00:39:45.396 17:04:36 -- pm/common@50 -- $ sudo -E kill -TERM 3013066
00:39:45.396 + [[ -n 2390735 ]]
00:39:45.396 + sudo kill 2390735
00:39:45.406 [Pipeline] }
00:39:45.422 [Pipeline] // stage
00:39:45.428 [Pipeline] }
00:39:45.445 [Pipeline] // timeout
00:39:45.451 [Pipeline] }
00:39:45.467 [Pipeline] // catchError
00:39:45.474 [Pipeline] }
00:39:45.490 [Pipeline] // wrap
00:39:45.496 [Pipeline] }
00:39:45.512 [Pipeline] // catchError
00:39:45.521 [Pipeline] stage
00:39:45.524 [Pipeline] { (Epilogue)
00:39:45.538 [Pipeline] catchError
00:39:45.540 [Pipeline] {
00:39:45.553 [Pipeline] echo
00:39:45.555 Cleanup processes
00:39:45.561 [Pipeline] sh
00:39:45.913 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:45.913 3013180 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:39:45.913 3013677 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:45.928 [Pipeline] sh
00:39:46.216 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:46.216 ++ grep -v 'sudo pgrep'
00:39:46.216 ++ awk '{print $1}'
00:39:46.216 + sudo kill -9 3013180
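The cleanup just above lists every process whose command line mentions the workspace, filters out the pgrep invocation itself (pgrep -af would otherwise match its own sudo wrapper, pid 3013677 here), and force-kills what remains, in this run a leftover ipmitool dump. The same pattern as a standalone snippet:

    # Self-excluding cleanup: pgrep -af matches full command lines, so the
    # 'sudo pgrep' line must be filtered out before the kill.
    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    # $pids is intentionally unquoted so each pid becomes its own argument.
    [[ -n $pids ]] && sudo kill -9 $pids || true   # stay green when nothing is left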
00:39:46.230 [Pipeline] sh
00:39:46.518 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:40:01.437 [Pipeline] sh
00:40:01.724 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:40:01.724 Artifacts sizes are good
00:40:01.739 [Pipeline] archiveArtifacts
00:40:01.746 Archiving artifacts
00:40:01.926 [Pipeline] sh
00:40:02.214 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:40:02.230 [Pipeline] cleanWs
00:40:02.240 [WS-CLEANUP] Deleting project workspace...
00:40:02.240 [WS-CLEANUP] Deferred wipeout is used...
00:40:02.247 [WS-CLEANUP] done
00:40:02.249 [Pipeline] }
00:40:02.266 [Pipeline] // catchError
00:40:02.279 [Pipeline] sh
00:40:02.567 + logger -p user.info -t JENKINS-CI
00:40:02.577 [Pipeline] }
00:40:02.588 [Pipeline] // stage
00:40:02.592 [Pipeline] }
00:40:02.603 [Pipeline] // node
00:40:02.608 [Pipeline] End of Pipeline
00:40:02.634 Finished: SUCCESS